2017. 3. 20. 11:39 Brain Trainning/DataBase

Source: https://blogs.msdn.microsoft.com/psssql/2016/10/04/default-auto-statistics-update-threshold-change-for-sql-server-2016/

Default auto statistics update threshold change for SQL Server 2016



Recently, a customer contacted us about a performance issue: their server performed much worse on SQL Server 2016 after an upgrade. To demonstrate, he even captured a video showing that the session compiling the query had multiple threads waiting on LATCH_EX of ACCESS_METHODS_DATASET_PARENT. This type of latch is used to synchronize dataset access among parallel threads and generally comes into play when large amounts of data are involved. Below is a screenshot from the video; I didn't include all the columns because I don't want to reveal the customer's database and user names. This was very puzzling, because we should not see parallel threads during the true phases of compilation.

 


 

 

After staring at it for a moment, we started to realize that this must have something to do with auto update statistics. Fortunately, we had a copy of pssdiag captured that included trace data. To prove that auto update statistics could have caused the issue, we needed to find evidence of long-running auto update stats events. After importing the data, we were able to find some auto update stats events that took more than 2 minutes. These stats updates occurred for the queries the customer had pointed out. Below is an example of an auto update in a profiler trace extracted from the customer's data collection.

 

 


 

 

Root cause & SQL Server 2016 change

This turned out to be caused by the default auto stats threshold change in SQL Server 2016.

The KB article Controlling Autostat (AUTO_UPDATE_STATISTICS) behavior in SQL Server documents two thresholds. I will call them the old threshold and the new threshold.

Old threshold: it takes a 20% change in rows before auto update stats kicks in (there are some tweaks for small tables; for large tables, a 20% change is needed). For a table with 100 million rows, 20 million rows must change before auto stats kicks in. For the vast majority of large tables, auto stats therefore does very little.

New threshold: Starting with SQL Server 2008 R2 SP1, we introduced trace flag 2371 to control auto update statistics better (the new threshold). Under trace flag 2371, the percentage of changes required drops dramatically as tables get larger; the threshold is roughly SQRT(1000 * rows), so a table with 100 million rows needs only about 316,000 row changes instead of 20 million. In other words, trace flag 2371 can cause much more frequent updates. This new threshold is off by default and is enabled by the trace flag, but in SQL Server 2016 it is enabled by default for a database with compatibility level 130.

In short:

SQL Server 2014 or below: the default is the old threshold. You can use trace flag 2371 to activate the new threshold.

SQL Server 2016: the default is the new threshold if the database compatibility level is 130. If the compatibility level is below 130, the old threshold is used (unless you use trace flag 2371), as sketched below.
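
As a quick reference, here is a minimal T-SQL sketch of the knobs involved (YourDb is a placeholder database name):

-- Check a database's compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = N'YourDb';

-- On SQL Server 2016: revert to the old threshold by lowering the compatibility level
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 120;

-- On SQL Server 2008 R2 SP1 through 2014: opt in to the new threshold instance-wide
DBCC TRACEON (2371, -1);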

The customer very frequently 'merges' data into some big tables, some of which have 300 million rows. Because of the threshold change for large tables, the process now triggered much more frequent stats updates.

 

Solution

The solution is to enable asynchronous statistics update. After the customer implemented this approach, their server performance went back to its previous level.
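
For example (YourDb is a placeholder; AUTO_UPDATE_STATISTICS itself stays ON, only the timing changes):

-- Queries compile with the existing statistics while the update runs in the background
ALTER DATABASE YourDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;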

 


 

Demo of auto stats threshold change


-- set up a table and insert 100 million rows
drop database testautostats
go
create database testautostats
go
use testautostats
go
create table t (c1 int)
go
set nocount on
declare @i int
set @i = 0
begin tran
while @i < 100000000
begin
    declare @rand int = rand() * 1000000000
    -- commit and restart the transaction every 100,000 rows to keep the log from growing
    if (@i % 100000 = 0)
    begin
        while @@trancount > 0 commit tran
        begin tran
    end
    insert into t values (@rand)
    set @i = @i + 1
end
commit tran
go
create index ix on t (c1)
go

 

 

-- run this query and then query the stats properties
-- note the last_updated column
select count (*) from t join sys.objects o on t.c1=o.object_id
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')


 

-- delete 1 million rows
-- run the same query and query the stats properties again
-- note that the last_updated column changed
delete top (1000000) from t
go
select count (*) from t join sys.objects o on t.c1=o.object_id
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')

 


 

-- now switch the DB compatibility level to 120
-- delete 1 million rows again
-- note that the stats weren't updated (the last_updated column stays the same)
alter database testautostats SET COMPATIBILITY_LEVEL=120
go
delete top (1000000) from t
go
select * from sys.stats st cross apply sys.dm_db_stats_properties (object_id, stats_id)
where st.object_id = object_id ('t')

 



See also: http://www.sqlservergeeks.com/sql-server-trace-flag-2371-to-control-auto-update-statistics-threshold-and-behavior-in-sql-server/


posted by LifeisSimple
2016. 7. 6. 16:17 Brain Trainning/DataBase

Understanding how SQL Server stores data in data files




Source: https://www.mssqltips.com/sqlservertip/4345/understanding-how-sql-server-stores-data-in-data-files/


Problem

Have you ever thought about how SQL Server stores data in its data files? As you know, data in tables is stored in row and column format at the logical level, but physically the data is stored in data pages that are allocated from the database's data files. In this tip I will show how pages are allocated to data files and what happens when a SQL Server database has multiple data files.

Solution

Every SQL Server database has at least two operating system files: a data file and a log file. Data files can be of two types: primary or secondary. The primary data file contains startup information for the database and points to the other files in the database; user data and objects can be stored in it, and every database has exactly one primary data file. Secondary data files are optional and can be used to spread data across multiple files/disks by putting each file on a different disk drive. A SQL Server database can have multiple data and log files, but only one primary data file. On top of these operating system files sit filegroups, which act as logical containers for the data files; a filegroup can contain multiple data files.

The disk space allocated to a data file is logically divided into pages, which are the fundamental unit of data storage in SQL Server. A database page is an 8 KB chunk of data. When you insert data into a SQL Server database, it saves the data to a series of 8 KB pages inside the data file. If multiple data files exist within a filegroup, SQL Server allocates pages across all of the data files in a round-robin fashion: pages are allocated from data file 1, then from data file 2, and so on, and then from data file 1 again. How much is allocated from each file is governed by an algorithm known as proportional fill.

The proportional fill algorithm determines the amount of data written to each file in a multi-file filegroup based on the proportion of free space within each file, so that all of the files become full at approximately the same time. For example, if file A has 90 MB free and file B has 30 MB free, SQL Server directs roughly three times as many new allocations to file A as to file B.

Analyzing How SQL Server Data is Stored

Step 1: First we will create a database named "Manvendra" with three data files (1 primary and 2 secondary data files) and one log file by running the below T-SQL code. You can change the name of the database, file path, file names, size and file growth according to your needs.

CREATE DATABASE [Manvendra]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'Manvendra', FILENAME = N'C:\MSSQL\DATA\Manvendra.mdf',SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
( NAME = N'Manvendra_1', FILENAME = N'C:\MSSQL\DATA\Manvendra_1.ndf',SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB ),
( NAME = N'Manvendra_2', FILENAME = N'C:\MSSQL\DATA\Manvendra_2.ndf' ,SIZE = 5MB , MAXSIZE = UNLIMITED, FILEGROWTH = 10MB )
 LOG ON
( NAME = N'Manvendra_log', FILENAME = N'C:\MSSQL\DATA\Manvendra_log.ldf',SIZE = 10MB , MAXSIZE = 1GB , FILEGROWTH = 10%)
GO

Step 2: Now we can check the available free space in each data file of this database to track the sequence of page allocations to the data files. There are multiple ways to check such information and below is one option. Run the below command to check free space in each data file.

USE Manvendra
GO
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see the data file names, file IDs, physical name, total size and available free space in each of the database files.

data file free spaces post SQL Server database creation

We can also check how many extents are allocated for this database. We will run the below DBCC command to get this information. Although this is an undocumented DBCC command, it can provide very useful information.

USE Manvendra
GO
DBCC showfilestats

With this command we can see the number of extents for each data file. As you may know, each data page is 8 KB and eight contiguous pages make up one extent, so an extent is 64 KB. We created each data file with a size of 5 MB, so the total number of available extents per file is 80, shown in the TotalExtents column; we get this from (5 * 1024) / 64.

UsedExtents is the number of extents allocated with data. As mentioned above, the primary data file includes system information about the database, which is why this file has a higher number of UsedExtents.

used extents post SQL Server database creation

Step 3: The next step is to create a table in which we will insert data. Run the below command to create a table. Once the table is created we will run both commands again which we ran in step 2 to get the details of free space and used/allocated extents.

USE Manvendra;
GO
CREATE TABLE [Test_Data] (
    [Sr.No] INT IDENTITY,
    [Date] DATETIME DEFAULT GETDATE (),
    [City] CHAR (25) DEFAULT 'Bangalore',
 [Name] CHAR (25) DEFAULT 'Manvendra Deo Singh');

Step 4: Check the allocated pages and the free space available in each data file by running the same commands from step 2.

USE Manvendra
Go
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see that there is no difference between this screenshot and the one above, except for a small difference in the FreeSpace of the transaction log file.

SQL Server database file space post table creation

Now run the below DBCC command to check the allocated pages for each data file.

DBCC showfilestats

You can see the allocated extents for each data file have not changed.

used extents post SQL Server table creation

Step 5: Now we will insert some data into this table to fill the data files. Run the below command to insert 10,000 rows into the table Test_Data.

USE Manvendra
go
INSERT INTO Test_DATA DEFAULT VALUES;
GO 10000

Step 6: Once the data is inserted, we will check the available free space and the total allocated extents in each data file.

USE Manvendra
Go
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

You can see the difference between the screenshot below and the above screenshot. Free space in each data file has been reduced and the same amount of space has been allocated from both of the secondary data files, because both files have the same amount of free space and proportional fill works based on the free space within a file. 

SQL Server Database File space post data insert

Now run the below DBCC command to check the allocated extents for each data file.

DBCC showfilestats

You can see a few more extents have been allocated in each data file. The primary data file now has 41 used extents and the two secondary data files have 10 between them, for a total of 51 extents holding data so far. Both secondary data files have the same number of extents allocated, which demonstrates the proportional fill algorithm.

used extents post SQL Server data insert

Step 7: We can also see which data file each page of table "Test_Data" lives in by running the below DBCC command. This will show us that the data is stored across all of the data files.

DBCC IND ('Manvendra', 'Test_data', -1);

I attached two screenshots because there were too many rows to show all of the data file IDs in one. File IDs are shown in each screenshot, so we can see each data page and its respective file ID. From this we can say that the table Test_Data is spread across all three data files, as shown in the following screenshots.

SQL Server data table saved on which data files



Data table saved on particular SQL Server data files
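
On SQL Server 2012 and later, a similar page-level picture is available from sys.dm_db_database_page_allocations. It is likewise undocumented, so treat this as a sketch rather than a guaranteed interface:

-- Lists each page of Test_Data together with the data file it lives in
SELECT allocated_page_file_id, allocated_page_page_id, page_type_desc
FROM sys.dm_db_database_page_allocations(DB_ID('Manvendra'), OBJECT_ID('Test_Data'), NULL, NULL, 'DETAILED');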

Step 8: We will repeat the same exercise to check the space allocation for each data file. Run the same command from step 5 to insert 10,000 more rows into the table Test_Data. Once the rows have been inserted, check the free space and allocated extents for each data file.

USE Manvendra
GO
INSERT INTO Test_DATA DEFAULT VALUES;
GO 10000
Select DB_NAME() AS [DatabaseName], Name, file_id, physical_name,
    (size * 8.0/1024) as Size,
    ((size * 8.0/1024) - (FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024)) As FreeSpace
    From sys.database_files

Once again both secondary data files have the same amount of free space, and a similar amount of space has been allocated from the primary data file as well. This shows that SQL Server uses the proportional fill algorithm to distribute data across the data files.

SQL Server database file space post data insert

We can get the extent information again for the data files.

DBCC showfilestats

Again we can see an increase in UsedExtents for all three data files.

used extents post SQL Server data insert
Next Steps
  • Create a test database and follow these steps, so you can better understand how SQL Server stores data at a physical and logical level. 
  • Explore more knowledge with SQL Server Database Administration Tips


posted by LifeisSimple
2016. 6. 22. 09:57 Brain Trainning/DataBase



Graphing MySQL performance with Prometheus and Grafana


Source: https://www.percona.com/blog/2016/02/29/graphing-mysql-performance-with-prometheus-and-grafana/


| February 29, 2016 | Posted In: Monitoring, MySQL, Prometheus

This post explains how you can quickly start using trending tools such as Prometheus and Grafana for monitoring and graphing MySQL and system performance.

First of all, let me mention that the Percona Monitoring and Management beta was released recently; it is an easy way to get all of this.

I will try to keep this post as short as possible, so you can quickly set things up before getting bored. I plan to cover the details in the next few posts. Here I am going to go through the installation process in order to get some really useful and good-looking graphs at the end.

Overview

Prometheus is an open-source service monitoring system and time-series database. In short, this quite efficient daemon scrapes metrics from remote machines over HTTP and stores the data in its local time-series database. Prometheus provides a simple web interface, a very powerful query language, an HTTP API, and so on. However, the storage is not designed to be durable for the time being.

The remote machines need to run exporters to expose metrics to Prometheus. We will be using the following two: node_exporter for system metrics and mysqld_exporter for MySQL metrics.

Grafana is an open-source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus and InfluxDB. It is a powerful tool for visualizing large-scale measurement data and is designed to work with time series. Grafana supports different types of graphs, allows for custom representation of individual metrics on the graph, and offers various methods of authentication, including LDAP.

Diagram

Here is a diagram of the setup we are going to use:
Prometheus + Grafana diagram

Prometheus setup

Install Prometheus on the monitor host.

Get the latest tarball from GitHub.

Create a simple config:
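
A minimal prometheus.yml along these lines should work (the job names and scrape interval are my choices; 9100 and 9104 are the default node_exporter and mysqld_exporter ports; this uses the modern static_configs syntax, while releases from early 2016 spelled it target_groups):

global:
  scrape_interval: 5s

scrape_configs:
  - job_name: linux
    static_configs:
      - targets: ['192.168.56.107:9100']
        labels:
          alias: db1

  - job_name: mysql
    static_configs:
      - targets: ['192.168.56.107:9104']
        labels:
          alias: db1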

where 192.168.56.107 is the IP address of the DB host we are going to monitor and db1 is its short name. Note that the "alias" label is important here, because we rely on it in the predefined dashboards below to get per-host graphs.

Start Prometheus in foreground:
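
For example, from the unpacked tarball directory (the single-dash flag matches 2016-era releases; newer versions spell it --config.file):

./prometheus -config.file=prometheus.yml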

Now we can access Prometheus' built-in web interface at http://monitor_host:9090

Prometheus web interface
If you look at the Status page from the top menu, you will see that our monitoring targets are down so far. Now let's set them up: the Prometheus exporters.

Prometheus exporters setup

Install the exporters on the DB host. Of course, you can use the same monitor host for the experiment; obviously, this node must run MySQL.

Download exporters from here and there.

Start node_exporter in foreground:
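
For example, from the unpacked tarball (node_exporter serves metrics on port 9100 by default):

./node_exporter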

Unlike node_exporter, mysqld_exporter wants MySQL credentials. Those privileges should be sufficient:
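
A sketch of such grants (the prom user name and password are placeholders; IDENTIFIED BY inside GRANT is MySQL 5.x syntax):

GRANT REPLICATION CLIENT, PROCESS ON *.* TO 'prom'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON performance_schema.* TO 'prom'@'localhost';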

Create .my.cnf and start mysqld_exporter in foreground:
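
For example, with a .my.cnf matching the grants above (mysqld_exporter serves metrics on port 9104 by default; the single-dash flag matches 2016-era releases):

[client]
user=prom
password=secret

./mysqld_exporter -config.my-cnf=".my.cnf"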

At this point we should see our endpoints are up and running on the Prometheus Status page:
Prometheus status page

Grafana setup

Install on the monitor host.

Grafana has RPM and DEB packages. The installation is as simple as installing one package, on either an RPM-based or an APT-based system, as sketched below.
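
A hedged sketch, assuming the Grafana package repository is already configured on the host (the original post installed version 2.6):

# RPM-based system:
sudo yum install grafana

# or APT-based one:
sudo apt-get install grafana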

Open /etc/grafana/grafana.ini and edit the last section so that it ends as follows:
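
A sketch of the resulting ending, assuming Grafana 2.x's dashboards.json feature (the path is my choice and must match where the dashboards are copied in the deploy step below):

[dashboards.json]
enabled = true
path = /var/lib/grafana/dashboards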

Percona has built predefined dashboards for Grafana with Prometheus for you.

Let’s get them deployed:
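
A hedged sketch using Percona's public grafana-dashboards repository (the copy destination is an assumption tied to the grafana.ini section above):

git clone https://github.com/percona/grafana-dashboards.git
sudo cp -r grafana-dashboards/dashboards /var/lib/grafana/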

It is important to apply the following minor patch to Grafana 2.6 in order to use the interval template variable and get nicely zoomable graphs. The fix simply allows a variable in the Step field on the Grafana graph editor page. For more information, take a look at PR#3757 and PR#4257. We hope the latter will be released with the next Grafana version.

Those changes are idempotent.

Finally, start Grafana:
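
For example, on a sysvinit-style system:

sudo service grafana-server start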

At this point, we are one step away from being done. Log in to the Grafana web interface at http://monitor_host:3000 (admin/admin).

Go to Data Sources and add one for Prometheus:
Grafana datasource

Now check out the dashboards and graphs. Choose, say, "System Overview" with the period "Last 5 minutes" at the top right. You should see something similar to this:
Grafana screen
If your graphs are not populating, ensure the system time is correct on the monitor host.

Samples

Here are some real-world samples (see the original post for the full-size, scrollable images):
 
 
 
 

Enjoy!

Conclusion

Prometheus and Grafana make a great tandem for enabling monitoring and graphing capabilities for MySQL. The tools are pretty easy to deploy, and they are designed for time series with high efficiency in mind. In the next blog posts I will talk more about technical aspects, problems, and related stuff.


posted by LifeisSimple
2016. 6. 6. 22:11 Brain Trainning/DataBase

50 Important Queries in SQL Server


Source: http://www.c-sharpcorner.com/article/50-important-queries-in-sql-server/


In this article I will explain some general-purpose queries. I think every developer should know them. These queries are not tied to any specific topic of SQL, but knowing them can solve some complex tasks and they can be used in many scenarios, so I decided to write an article about them.

Query 1: Retrieve a List of All Databases

EXEC sp_helpdb

Example:

Example

Query 2: Display the Text of a Stored Procedure, Trigger, or View

exec sp_helptext @objname = 'Object_Name'

Example:

Example

Query 3: Get All Stored Procedures Related to a Database

SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id = o.id
WHERE o.xtype = 'P'

Example:

Example

To retrieve views, use “V” instead of “P”, and for functions use “FN”.

Query 4: Get All Stored Procedures Related to a Table

SELECT DISTINCT o.name, o.xtype
FROM syscomments c
INNER JOIN sysobjects o ON c.id = o.id
WHERE c.TEXT LIKE '%Table_Name%' AND o.xtype = 'P'

Example:

Example

To retrieve views, use “V” instead of “P”, and for functions use “FN”.

Query 5: Rebuild All Indexes of a Database

EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO

Example:

Example

Query 6: Retrieve All Dependencies of a Stored Procedure

This query returns the names of all objects used inside a stored procedure, such as tables, user-defined functions, and other stored procedures.

Query:

;WITH stored_procedures AS (
SELECT
oo.name AS table_name,
ROW_NUMBER() OVER (PARTITION BY o.name, oo.name ORDER BY o.name, oo.name) AS row
FROM sysdepends d
INNER JOIN sysobjects o ON o.id = d.id
INNER JOIN sysobjects oo ON oo.id = d.depid
WHERE o.xtype = 'P' AND o.name LIKE '%SP_NAme%' )
SELECT table_name FROM stored_procedures
WHERE row = 1

Example:

Example

Query 7: Find the Byte Size of All Tables in a Database

SELECT sob.name AS Table_Name,
SUM(sys.length) AS [Size_Table(Bytes)]
FROM sysobjects sob, syscolumns sys
WHERE sob.xtype = 'u' AND sys.id = sob.id
GROUP BY sob.name

Example:

Example

Query 8: Get All Tables That Don't Have an Identity Column

Query:

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME NOT IN
(
    SELECT DISTINCT c.TABLE_NAME
    FROM INFORMATION_SCHEMA.COLUMNS c
    INNER JOIN sys.identity_columns ic ON (c.COLUMN_NAME = ic.NAME)
)
AND TABLE_TYPE = 'BASE TABLE'

Example:

Example

Query 9: List of Primary Keys and Foreign Keys for the Whole Database

SELECT DISTINCT
Constraint_Name AS [Constraint],
Table_Schema AS [Schema],
Table_Name AS [TableName]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
GO

Example:

Example

Query 10: List of Primary Keys and Foreign Keys for a Particular Table

SELECT DISTINCT
Constraint_Name AS [Constraint],
Table_Schema AS [Schema],
Table_Name AS [TableName]
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE INFORMATION_SCHEMA.KEY_COLUMN_USAGE.TABLE_NAME = 'Table_Name'
GO

Example:

Example

Query 11: RESEED the Identity of All Tables

EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
DBCC CHECKIDENT (''?'', RESEED, 0)'

Example:

Example

Query 12: List of Tables with the Number of Records

CREATE TABLE #Tab
(
Table_Name [varchar](max),
Total_Records int
);
EXEC sp_MSForEachTable @command1 = 'Insert Into #Tab(Table_Name, Total_Records) SELECT ''?'', COUNT(*) FROM ?'
SELECT * FROM #Tab t ORDER BY t.Total_Records DESC;
DROP TABLE #Tab;

Example:

Example

Query 13: Get the Version Name of SQL Server

SELECT @@VERSION AS Version_Name

Example:

Example

Query 14: Get the Current Language of SQL Server

SELECT @@LANGUAGE AS Current_Language;

Example:

Example
Query 15: Disable All Constraints of a Table

ALTER TABLE Table_Name NOCHECK CONSTRAINT ALL

Example:

Example

Query 16: Disable All Constraints of All Tables

EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

Example:

Example

Query 17: Get the Current Language ID

SELECT @@LANGID AS 'Language ID'

Example:

Example

Query 18: Get the Precision Level Used by decimal and numeric as Currently Set in the Server

SELECT @@MAX_PRECISION AS 'MAX_PRECISION'

Example:

Example

Query 19: Return the Server Name of SQL Server

SELECT @@SERVERNAME AS 'Server_Name'

Example:

Example

Query 20: Get the Name of the Registry Key Under Which SQL Server Is Running

SELECT @@SERVICENAME AS 'Service_Name'

Example:

Example

Query 21: Get the Session ID of the Current User Process

SELECT @@SPID AS 'Session_Id'

Example:

Example

Query 22: Get the Current Value of the TEXTSIZE Option

SELECT @@TEXTSIZE AS 'Text_Size'

Example:

Example

Query 23: Retrieve the Free Space of the Hard Disks

EXEC master..xp_fixeddrives

Example:

example

Query 24: Disable a Particular Trigger

Syntax:

ALTER TABLE Table_Name DISABLE TRIGGER Trigger_Name

Example:

ALTER TABLE Employee DISABLE TRIGGER TR_Insert_Salary

Query 25: Enable a Particular Trigger

Syntax:

ALTER TABLE Table_Name ENABLE TRIGGER Trigger_Name

Example:

ALTER TABLE Employee ENABLE TRIGGER TR_Insert_Salary

Query 26: Disable All Triggers of a Table

We can disable and enable all triggers of a table using the previous query by specifying "ALL" in place of the trigger name.

Syntax:

ALTER TABLE Table_Name DISABLE TRIGGER ALL

Example:

ALTER TABLE Demo DISABLE TRIGGER ALL

Query 27: Enable All Triggers of a Table

ALTER TABLE Table_Name ENABLE TRIGGER ALL

Example:

ALTER TABLE Demo ENABLE TRIGGER ALL

Query 28: Disable All Triggers for a Database

Using the sp_msforeachtable system stored procedure, we can enable and disable all triggers for a database.

Syntax:

Use Database_Name
Exec sp_msforeachtable "ALTER TABLE ? DISABLE TRIGGER all"

Example:

example

Query 29: Enable All Triggers for a Database

Use Demo
Exec sp_msforeachtable "ALTER TABLE ? ENABLE TRIGGER all"

Example:

example

Query 30: List of Stored Procedures Modified in the Last N Days

SELECT name, modify_date
FROM sys.objects
WHERE type = 'P'
AND DATEDIFF(D, modify_date, GETDATE()) < N

Example:

example

Query 31: List of Stored Procedures Created in the Last N Days

SELECT name, sys.objects.create_date
FROM sys.objects
WHERE type = 'P'
AND DATEDIFF(D, sys.objects.create_date, GETDATE()) < N

Example:

Example

Query 32: Recompile a Stored Procedure

EXEC sp_recompile 'Procedure_Name';
GO

Example:

Example

Query 33: Recompile All Stored Procedures on a Table
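
Passing a table name to sp_recompile marks every stored procedure and trigger that references that table for recompilation the next time they run; a minimal sketch ('Table_Name' is a placeholder):

EXEC sp_recompile N'Table_Name';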