Using SQL Server 2008 Database Extended Properties to Automate Database Management Tasks


Written by Luis Miguel – Originally published on June 12, 2011

Problem:
Like any manual process, adding users to databases and setting their permissions across more than a few databases is a tedious and error-prone endeavor. By creating a couple of "admin" stored procedures, a DBA can automate the process to the point of executing the whole task in one fell swoop, by typing a few words in the query editor just once.

If you care for documentation [I do], SQL Server Extended Properties are very convenient when it comes to adding tags or labels to objects. I find it particularly useful to mark databases with notes that describe their purpose, their level of security sensitivity, and so on.

Once you have a standard for these documentation tags and you keep them updated, they can be a huge help in performing database administration tasks very effectively.
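
For example, a database can be tagged by running something like this while connected to it (a minimal sketch; the property name and value are simply the labels used later in this post):

USE my_database
GO
-- With no level specifiers, the extended property attaches to the database itself
-- (my_database is a placeholder for one of your own databases)
EXEC sys.sp_addextendedproperty
  @name = N'application_',
  @value = N'reporting_'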

Here is a list of the components I'm using to build this solution. I'm keeping all these administrative objects in the (I hope this name is descriptive enough) UTILITY_ADMIN user database.

1 – The table to keep a list of the databases that are currently online and have some kind of Extended Property


USE utility_admin
CREATE TABLE extendedproperties
  (
     dbname   VARCHAR(255),
     ep_name  VARCHAR(128),
     ep_value VARCHAR(128)
  ) 

2 – The stored procedure that does the bulk of the work, querying the Extended Properties of each database and storing them in the UTILITY_ADMIN.dbo.ExtendedProperties table


USE utility_admin
GO
CREATE PROCEDURE Getextendedproperties @DatabaseName VARCHAR(255) = NULL
AS
  IF @DatabaseName IS NULL
    SET @DatabaseName = Db_name()
  DECLARE @SQLText NVARCHAR(2000)
  IF Object_id(N'tempdb.dbo.##temp_ExtendedProperties') IS NOT NULL
    DROP TABLE ##temp_extendedproperties
  CREATE TABLE ##temp_extendedproperties
    (
       ep_name  VARCHAR(128),
       ep_value VARCHAR(128)
    )
  SET @SQLText =
'INSERT INTO ##temp_ExtendedProperties SELECT CAST(name AS VARCHAR(128)), CAST(value AS VARCHAR(128)) FROM '
+ @databasename +
'.sys.fn_listextendedproperty(default, default, default, default, default, default, default)'
  -- That is 7 defaults, one per parameter of fn_listextendedproperty
  EXECUTE(@SQLText)
  -- Return the database name along with each property so the caller
  -- can INSERT this result set straight into dbo.ExtendedProperties
  SELECT @DatabaseName, ep_name, ep_value
  FROM   ##temp_extendedproperties

3 – An SQL Server Agent job that runs every hour so that the list of databases with extended properties stays up to date


TRUNCATE TABLE utility_admin.dbo.extendedproperties
EXEC MASTER.sys.Sp_msforeachdb
'INSERT INTO utility_admin.dbo.ExtendedProperties EXEC utility_admin.dbo.GetExtendedProperties ?'

4 – UTILITY_ADMIN.dbo.dbaManageUserRole – This is the stored procedure you are going to execute to actually add the user and set his/her permissions. It requires three parameters: the name of the user, the name of the extended property, and the value of the extended property. I have hardwired the db_datareader role into this procedure, but with a little bit of imagination you could also pass the role as a parameter and make it even more flexible (see the sketch right after the procedure).


CREATE PROCEDURE Dbamanageuserrole @user     VARCHAR(64),
                                   @ep_name  VARCHAR(128),
                                   @ep_value VARCHAR(128)
AS
  DECLARE @dbname VARCHAR(128)
  DECLARE @adduser VARCHAR(512)
  DECLARE @addrole VARCHAR(512)
  DECLARE dbcursor CURSOR FOR
    SELECT dbname
    FROM   utility_admin.dbo.extendedproperties
    WHERE  ep_name = @ep_name
           AND ep_value = @ep_value
  OPEN dbcursor
  FETCH NEXT FROM dbcursor INTO @dbname
  WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @adduser = 'USE ' + @dbname + '; CREATE USER ' + @user +
                       ' FOR LOGIN ' + @user
        SET @addrole = 'USE ' + @dbname +
                       '; EXEC sp_addrolemember N''db_datareader'', ' + @user
        EXECUTE(@adduser)
        EXECUTE(@addrole)
        FETCH NEXT FROM dbcursor INTO @dbname
    END
  CLOSE dbcursor
  DEALLOCATE dbcursor
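
To illustrate that idea, here is a hedged sketch of the same procedure with the role as a fourth parameter (the _v2 name is just for illustration):

CREATE PROCEDURE Dbamanageuserrole_v2 @user     VARCHAR(64),
                                      @ep_name  VARCHAR(128),
                                      @ep_value VARCHAR(128),
                                      @role     VARCHAR(128) = 'db_datareader'
AS
  DECLARE @dbname VARCHAR(128)
  DECLARE @adduser VARCHAR(512)
  DECLARE @addrole VARCHAR(512)
  DECLARE dbcursor CURSOR FOR
    SELECT dbname
    FROM   utility_admin.dbo.extendedproperties
    WHERE  ep_name = @ep_name
           AND ep_value = @ep_value
  OPEN dbcursor
  FETCH NEXT FROM dbcursor INTO @dbname
  WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @adduser = 'USE ' + @dbname + '; CREATE USER ' + @user +
                       ' FOR LOGIN ' + @user
        -- The role is now a parameter instead of a hardwired literal
        SET @addrole = 'USE ' + @dbname + '; EXEC sp_addrolemember N''' +
                       @role + ''', ' + @user
        EXECUTE(@adduser)
        EXECUTE(@addrole)
        FETCH NEXT FROM dbcursor INTO @dbname
    END
  CLOSE dbcursor
  DEALLOCATE dbcursor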

Now we are ready to enjoy the rewards of our hard work time and time again. Here is the command I used to add the user 'canela' as a db_datareader to all the databases I have tagged with the extended property name 'application_' and value 'reporting_':


EXEC Dbamanageuserrole
  'canela',
  'application_',
  'reporting_'
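
To spot-check the result in one of the tagged databases, a quick query like this (a sketch; replace the database name with one of your own) shows the new user's role membership:

USE one_of_the_tagged_databases
GO
-- Lists the database roles the user belongs to
SELECT u.name AS user_name, r.name AS role_name
FROM   sys.database_role_members AS rm
       JOIN sys.database_principals AS r ON rm.role_principal_id = r.principal_id
       JOIN sys.database_principals AS u ON rm.member_principal_id = u.principal_id
WHERE  u.name = 'canela'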

Here is a picture of one of my lab servers showing a long list of dummy databases on the left pane and a shorter result list of the ones with extended properties on the bottom right side.

How to run/debug Visual Studio 2010 websites in IIS 7 instead of the default (Cassini) development web server


Written by Luis Miguel – Originally published on May 25, 2011

Synopsis:

When you create an ASP .NET 4 website in Visual Studio 2010 and do not modify any of its default settings, running or debugging the website from the IDE starts a special web server called Cassini; it does not run in the full-blown, professional-grade Windows web server (IIS 7).

Microsoft had many reasons to ship this special web server and have it run by default when Visual Studio and Visual Web Developer start a debug session on a project, but I do see a couple of issues with this:

1st – In the real world, applications use IIS 7, not Cassini, so I want to run/debug/test in IIS 7, not Cassini

2nd – Cassini is only available while the IDE is running, and it shuts down when you close Visual Studio. I want to be able to use the website and test it at any time, directly from any local browser or remotely from other workstations over the network, without having to keep a Visual Web Developer session open.

These two requirements seem very basic: I don't feel comfortable testing a project that will eventually be deployed to production unless the site is proven to run stably on the web server that will host it permanently, and I need the flexibility of an always-up web server.

So, after a bit of research and some trial-and-error experimentation, I found that the procedure listed below satisfies this need.

1 – Create a file system folder (e.g. c:\websites\mysite)
2 – Create a website in IIS 7 that points to the folder you created in step 1
3 – Create an ASP .NET 4.0 website project in Visual Studio 2010 and use the folder you created in step 1 as the target folder
4 – Save the project
5 – Modify the URL in Website Start…

From now on, in order to run the website from the IDE in debug mode, you need to open Visual Studio with the command "Run as Administrator"

Or use the menu Debug – Attach to Process

6 – Debug the project in Visual Studio 2010 or try to browse the site in a browser window

If you receive the error message "Unrecognized attribute 'targetFramework'", make sure the application pool for the website "mysite" in IIS 7 is set to support the .NET Framework 4.0
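
If you prefer the command line, something along these lines should switch the pool to .NET 4.0 (assuming the site runs in an application pool also named "mysite"; adjust the pool name to match your setup):

%windir%\system32\inetsrv\appcmd set apppool "mysite" /managedRuntimeVersion:v4.0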

Follow the next steps if you want to go a bit further and test the data access layer

  1. Add a database connection to your project
  2. Add a database control to the page Default.aspx
  3. Save the project
  4. Debug the project in Visual Studio 2010 or try to browse the site in a browser window

When I built the test for the data access layer on the 2nd environment [specs listed below] I ran into this problem:

'Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list'

To resolve it I ran the command
c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

There are some explanations of this issue on the internet, and more than one way (command) to resolve it. I know I installed SQL Server 2008 Service Pack 1 after the .NET Framework and ASP .NET 4, so that could have been the problem, but the command listed above took care of it in a couple of seconds.

This procedure applies to "WEBSITE" projects; if you created an ASP .NET "application" project, most of this work can be done very easily through the menus.

Technologies used:

1st Environment
- Windows Server 2008 R2 Enterprise 64-bit
- SQL Server 2008 R2 Standard 64-bit
- Internet Information Services (IIS) 7
- Visual Studio Professional 2010

2nd environment
- Windows Server 2008 Standard 32-bit with Service Pack 2
- SQL Server 2008 Standard 32-bit with Service Pack 1
- Internet Information Services (IIS) 7
- Visual Web Developer Express 2010

Workaround to bypass the 1000-record limit set by ADSI when querying Active Directory using SQL Server


Written by Luis Miguel – Originally published on January 29, 2011

Who can benefit from this post? – This post does not go into the details of how to build a solution to query Active Directory using SQL Server. Instead, it presents a workaround for people who are already querying Active Directory using SQL Server and are hitting the 1000-record barrier imposed by ADSI.

As always, this is a short and sweet post, so not every detail needed to perform each action is included here. However, the article should give you enough information to resolve the issue at hand fairly quickly.

Really QUICK-AND-DIRTY solution: If you have access to modify Active Directory settings, increase the value of MaxPageSize from its default of 1000 to something that suits your needs, BUT please read Avoid changing the MaxPageSize LDAP query policy to understand the possible consequences of this action, the author (unlike me) seems to know quite a bit about the topic.
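
For reference, that change is typically made with the ntdsutil utility on a domain controller. A rough sketch of the session (following the usual "LDAP policies" sequence; the server name and the value 2000 are just examples):

ntdsutil
ldap policies
connections
connect to server your_dc_name
q
show values
set maxpagesize to 2000
commit changes
q
q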

For the rest of us – the ones that can only “view” Active Directory – read below.

The first time I had to work on a solution to query Active Directory using SQL Server, I remember reading a few posts on the Internet about the 1000-record limit and at least one possible trick to bypass it. At the time it was not an issue for me; the AD implementation I was working with had fewer than 1000 users, so I didn't pay attention to the specifics of the solution. I don't know about you, but 99% of the time when I'm working on a project/task, there is barely enough time to get it out, let alone plan/build for "scalability".

So here we are now with report consumers complaining about some user names missing from the “employee” list.

While troubleshooting this issue, I noticed that there are a few user properties in AD that could break the dataset into chunks of fewer than 1000 records, which I could then insert into a table one chunk at a time to end up with the complete data set.

The property I chose to use was "whenCreated". I determined that if I added the filter "whenCreated < 20100101000000.0Z" to the WHERE clause of the query, Active Directory would return about 900 records. To get the rest of the data set (close to 300 records) I changed the filter to "whenCreated > 20100101000000.0Z".

Here is a copy of the two complete queries:

INSERT INTO my_ad_users_list
SELECT givenname                  AS firstname,
       sn                         AS lastname,
       initials,
       displayname,
       samaccountname             AS ACCOUNT,
       telephonenumber,
       mobile,
       facsimiletelephonenumber   AS fax,
       pager,
       mail                       AS emailaddress,
       department,
       title,
       l                          AS city,
       postalcode,
       ipphone                    AS extension,
       physicaldeliveryofficename AS office,
       'my_domain'                AS domain,
       'regular'                  AS usertype
FROM   Openquery (adsi, 'SELECT givenname, sn, initials, displayname, sAMAccountName, telephoneNumber,
 mobile, facsimileTelephoneNumber, pager, mail, department, title, l, PostalCode, ipPhone, physicalDeliveryOfficeName
  FROM ''LDAP://ad_server_name.my_domain.my_company.corp''
  WHERE userAccountControl=512 AND objectCategory = ''Person'' AND objectClass = ''user''
 AND whenCreated < ''20100101000000.0Z''')
WHERE  givenname IS NOT NULL
       AND sn IS NOT NULL

INSERT INTO my_ad_users_list
SELECT givenname                  AS firstname,
       sn                         AS lastname,
       initials,
       displayname,
       samaccountname             AS ACCOUNT,
       telephonenumber,
       mobile,
       facsimiletelephonenumber   AS fax,
       pager,
       mail                       AS emailaddress,
       department,
       title,
       l                          AS city,
       postalcode,
       ipphone                    AS extension,
       physicaldeliveryofficename AS office,
       'my_domain'                AS domain,
       'regular'                  AS usertype
FROM   Openquery (adsi, 'SELECT givenname, sn, initials, displayname, sAMAccountName, telephoneNumber,
 mobile, facsimileTelephoneNumber, pager, mail, department, title, l, PostalCode, ipPhone, physicalDeliveryOfficeName
  FROM ''LDAP://ad_server_name.my_domain.my_company.corp''
  WHERE userAccountControl=512 AND objectCategory = ''Person'' AND objectClass = ''user''
 AND whenCreated > ''20100101000000.0Z''')
WHERE  givenname IS NOT NULL
       AND sn IS NOT NULL
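
A quick sanity check after both inserts (each OPENQUERY chunk must stay under the 1000-row ADSI limit, or it is being silently truncated again):

SELECT COUNT(*) AS total_users
FROM   my_ad_users_list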

Tools used

- SQL Server 2005
- ADExplorer

Never trust auto-generated tables


Written by Luis Miguel – Originally published on November 14, 2010.

There is a huge difference between numeric data types. Without going into the details of how much memory they use or any other technicalities that could fill a whole chapter of a book, let's just say that you can, on purpose or accidentally, use a larger-than-necessary numeric type (for example, DECIMAL to store integers), and your program will not crash; everything will "just work".

But will it work efficiently? Could this be considered an optimized design? Not at all.
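
A quick way to see the storage difference (a minimal illustration; DATALENGTH returns the number of bytes used by an expression):

-- INT stores this value in 4 bytes; DECIMAL(15,0) needs 9 bytes for the same number
SELECT DATALENGTH(CAST(1000 AS INT))           AS int_bytes,     -- 4
       DATALENGTH(CAST(1000 AS DECIMAL(15,0))) AS decimal_bytes  -- 9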

Consider this scenario

1 – You use an SSIS package to import data from Oracle into SQL Server and you tell the OLE DB Destination in the Data Flow Task to create new tables for you automatically
2 – You know the application logic and you know that certain fields will never hold a value greater than 1000. What you don’t know is that these fields have been stored in Oracle using the Number(15,0) data type and SSIS set them in the destination table as Numeric(15,0)
3 – You create a join between the two imported tables and start manipulating the data. As you run the update queries, you notice that for certain logic using certain fields, updating a third table with about 20,000 records takes literally 3 seconds, while performing the same operation using a different field takes close to 3 minutes. Obviously there is an enormous difference that at first glance does not make any sense

As you start looking more deeply into the problem you notice that the query that runs very fast uses a column of the Integer data type, while the slow query uses a field set to Numeric(15,0).

Since you know that this is not the correct data type for the field, you alter the table and change it to Integer. Now you run the two queries and they BOTH take 3 seconds.
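
The fix itself is a one-liner (table and column names here are hypothetical):

-- Narrow the column from NUMERIC(15,0) to INT; every existing value must fit in an INT
ALTER TABLE dbo.imported_table ALTER COLUMN some_field INT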

Consider how this small (data type) lapse made a simple update operation run 60 times slower than it normally should.

Lesson learned: “Trust but verify”. At least when performance matters, which is probably 99.99% of the time.

It sure is convenient to let SSIS create the table automatically for you. Having to manually create a table before bringing in the data could be very labor intensive, especially if the table has dozens of columns. Just go back and look at what SSIS did for you and make sure it is what you need.
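
One easy way to review what SSIS generated (a sketch; filter on your own table name):

-- Inspect the data types SSIS picked for the auto-generated table
SELECT COLUMN_NAME, DATA_TYPE, NUMERIC_PRECISION, NUMERIC_SCALE
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'imported_table'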

Technologies used:
- Oracle 9 on Solaris 9
- SQL Server 2008 Standard on Windows Server 2008 Enterprise 64-bit
- T-SQL
- SSIS 2005
- Microsoft Office (Excel) 2010 Professional

Building a centralized backup monitoring solution


Written by Luis Miguel – Originally published on October 19, 2010.

A complete solution would include sending data from several servers into a central server; in this example I'm going to show you how to do it for one server. Also, the final presentation of the data should use some form of reporting tool that can display it in a portal for easy consumption.

Step 1 – Create a table to store the backup information that will be captured periodically

USE utility
GO
CREATE TABLE [dba].[backups](
    [backup_log_id]        [int] IDENTITY(1,1) NOT NULL,
    [server_name]          [nvarchar](64) NULL,
    [database_name]        [nvarchar](128) NULL,
    [start_date]           [datetime] NULL,
    [finish_date]          [datetime] NULL,
    [expiration_date]      [datetime] NULL,
    [backup_type]          [nvarchar](64) NULL,
    [backup_size]          [int] NULL,
    [physical_device_name] [nvarchar](128) NULL,
    [backupset_name]       [nvarchar](128) NULL,
    [user_name]            [nvarchar](64) NULL,
    [backup_log_date]      [datetime] NULL,
    CONSTRAINT [PK_backups] PRIMARY KEY CLUSTERED(
        [backup_log_id] ASC
    )
    WITH(
        PAD_INDEX = OFF,
        STATISTICS_NORECOMPUTE = OFF,
        IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON,
        ALLOW_PAGE_LOCKS = ON
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Step 2 – Create the stored procedure that will capture the data and save it to the backup information table

USE msdb
GO
CREATE PROCEDURE [dbo].[usp_backup_status]
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @backup_log_date DATETIME
    -- Today's date at midnight, used to stamp each captured row
    SET @backup_log_date = (SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))
    INSERT INTO utility.dba.backups
    SELECT RTRIM(CONVERT(CHAR(100), SERVERPROPERTY('Servername'))) AS Server,
           a.name,
           b.backup_start_date,
           d.backup_finish_date,
           b.expiration_date,
           CASE b.type
               WHEN 'D' THEN 'Database'
               WHEN 'L' THEN 'Log'
               WHEN 'I' THEN 'Differential database'
               WHEN 'F' THEN 'File or filegroup'
               WHEN 'G' THEN 'Differential file'
               WHEN 'P' THEN 'Partial'
               WHEN 'Q' THEN 'Differential partial'
               -- A simple CASE never matches NULL, so the ELSE branch
               -- handles databases that have never been backed up
               ELSE 'Null'
           END AS backup_type,
           b.backup_size / 1024000 AS backup_size, -- approximate size in MB
           c.physical_device_name,
           b.name AS backupset_name,
           b.user_name,
           @backup_log_date AS backup_log_date
    FROM   sys.databases AS a
           LEFT OUTER JOIN
           (SELECT database_name,
                   MAX(backup_finish_date) AS backup_finish_date
            FROM   msdb.dbo.backupset
            GROUP  BY database_name) AS d
               ON a.name = d.database_name
           LEFT OUTER JOIN msdb.dbo.backupset AS b
               ON (a.name = b.database_name)
                  AND (b.backup_finish_date = d.backup_finish_date)
           LEFT OUTER JOIN msdb.dbo.backupmediafamily AS c
               ON b.media_set_id = c.media_set_id
    ORDER  BY a.name
END

Step 3 – Create a job to run the stored procedure as frequently as needed
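
As a hedged sketch, the job can be created with the msdb job stored procedures (the job, step, and schedule names are arbitrary; adjust the schedule to your needs):

USE msdb
GO
-- Create a job that runs usp_backup_status once a day at 06:00
EXEC dbo.sp_add_job @job_name = N'Capture backup status'
EXEC dbo.sp_add_jobstep
    @job_name = N'Capture backup status',
    @step_name = N'Run usp_backup_status',
    @subsystem = N'TSQL',
    @command = N'EXEC msdb.dbo.usp_backup_status',
    @database_name = N'msdb'
EXEC dbo.sp_add_schedule
    @schedule_name = N'Daily 6 AM',
    @freq_type = 4,        -- daily
    @freq_interval = 1,    -- every day
    @active_start_time = 60000  -- 06:00:00
EXEC dbo.sp_attach_schedule
    @job_name = N'Capture backup status',
    @schedule_name = N'Daily 6 AM'
-- Target the local server so the job actually gets scheduled
EXEC dbo.sp_add_jobserver @job_name = N'Capture backup status'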

Step 4 – Query the “backups” table to see information about today’s backups

SELECT
[server_name],
[database_name],
[start_date],
[finish_date],
[expiration_date],
[backup_type],
[backup_size],
[physical_device_name],
[backupset_name],
[user_name],
[backup_log_date]
FROM [utility].[dba].[backups]
WHERE [backup_log_date] = (SELECT DATEADD(dd, DATEDIFF(dd, 0, GetDate()), 0))

Set the recovery model to “SIMPLE” programmatically


Written by Luis Miguel – Originally published on September 18, 2010.

Here is a tiny stored procedure to set the recovery model to SIMPLE on every user database in your instance.

CREATE PROCEDURE [dbo].[Dba_reset_recovery_model]
AS
  DECLARE @db_name VARCHAR(128) -- Database name
  DECLARE @alterdb VARCHAR(512) -- Dynamic SQL command
  DECLARE db_cursor CURSOR FOR
    -- Get the USER DATABASES that ARE ONLINE and their recovery model IS NOT SIMPLE
    SELECT name
    FROM   sys.databases
    WHERE  database_id > 4
           AND [state] = 0
           AND [recovery_model] <> 3
  OPEN db_cursor
  FETCH NEXT FROM db_cursor INTO @db_name
  WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @alterdb = 'ALTER DATABASE ' + @db_name + ' SET RECOVERY SIMPLE'
        EXEC (@alterdb)
        FETCH NEXT FROM db_cursor INTO @db_name
    END
  CLOSE db_cursor
  DEALLOCATE db_cursor 
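
Running it and verifying the result is straightforward; a quick check afterwards (a sketch, run from the database where you created the procedure):

EXEC dbo.Dba_reset_recovery_model
-- Confirm that no online user database is left in FULL or BULK_LOGGED
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  database_id > 4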

Warnings:
1 – This script is known to run well in SQL Server 2008; I have not tested it in any other version of SQL Server.
2 – Do not use this script in a Production system without prior extensive testing in a lab environment.

Overview of the prerequisites for the installation of the SQL Server 2008 sample databases


Written by Luis Miguel – Originally published on September 1st, 2010

As of this writing (Sept. 2010), the installation file is hosted on CodePlex. The name of the setup file is AdventureWorks2008_SR4.exe (SR means "sample refresh") and the link to download it is http://msftdbprodsamples.codeplex.com/releases/view/37109

Prerequisites for the installation:
1 – Full Text Search must be installed
2 – The Full-text Filter Daemon Launcher service must be running
3 – Filestream must be enabled

Follow the instructions listed below to confirm that the prerequisites have been fulfilled.

To check that Full Text Search was installed:
1 – Look for the file "ConfigurationFile.ini" somewhere below the path "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log". There should be one or more folders with names that look like timestamps, e.g. 20081122_105904; start looking in the oldest folder
2 – Once you locate the file ConfigurationFile.ini, search for a string like this: "FEATURES=SQLENGINE,REPLICATION,FULLTEXT,AS,RS,BIDS,CONN,IS,BC,BOL,SSMS,ADV_SSMS". As you can see, Full Text is one of the features installed on this server

To check if the Full-text Filter Daemon Launcher service is running:
1 – Open Start – Programs – Microsoft SQL Server 2008 – Configuration Tools – Configuration Manager
2 – Select SQL Server Services from the left pane
3 – Look for the Full-text Filter Daemon Launcher entry and make sure it says Running under the State column heading, like the screenshot below

To verify if Filestream is enabled
1 – Open Configuration Manager as explained above
2 – Select SQL Server Services in the left pane
3 – Right click on “SQL Server (MSSQLSERVER)”, select Properties and click on the FILESTREAM tab.
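
If you prefer to check from a query window, these server properties report the same information (a small sketch; a FILESTREAM level of 0 means disabled):

-- 1 = Full-Text is installed; FilestreamEffectiveLevel > 0 means FILESTREAM is enabled
SELECT SERVERPROPERTY('IsFullTextInstalled')      AS fulltext_installed,
       SERVERPROPERTY('FilestreamEffectiveLevel') AS filestream_level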

After you confirm that the server fulfills all the prerequisites, all you need to do is run the installation program and follow the wizard.

If everything goes well, and depending on your selections in the wizard, once the installation program has finished you should see some or all of these databases on your server

SQL Server 2008 Sample Databases

The purpose of this post is to help you quickly find out if the server is ready to run the setup program that installs the SQL Server 2008 sample databases. For the sake of brevity I cannot get into the details of how to resolve issues if one or more of the requirements are not met.

*This post applies to SQL Server 2008 RTM (release to manufacturing); there is already a newer version called SQL Server 2008 R2, and the instructions to install the sample databases for that version might be different.
