Windows Management and Scripting

A wealth of tutorials on Windows Operating Systems, SQL Server and Azure


Posts Tagged ‘Microsoft SQL Server’

How to Increase SQL Server query performance

Posted by Alin D on December 21, 2010

Thanks to the natural language roots of the SQL language, writing queries has become extremely easy for just about anyone to pick up. But its simplicity also makes it easy to write poorly performing queries. Here are some simple changes you can make to improve not only query performance, but, in some cases, overall SQL Server system performance as well.

CREATE TABLE vs. SELECT INTO

Oftentimes, within stored procedures or other SQL scripts, temp tables must be created and loaded with data. When writing these queries, many SQL Server DBAs and developers like to use the SELECT INTO method, like this:

SELECT *
INTO #TempTable
FROM sysobjects

While this technique works fine for small tables, when dealing with large record sets or long-running queries it creates locks on the system objects within the tempdb database. As a result, other queries and procedures that need to create objects within the tempdb database will have to wait for the long-running query to complete. This is because when an object is created, an exclusive lock is taken against system tables such as sysobjects, syscolumns and sysindexes (SQL Server 2000) or sysallocunits, syscolpars, syshobtcolumns, sysschobjs and sysserefs (SQL Server 2005). You can see this easily by opening two query windows and running the following:

(First window)

begin tran
create table #test1 (c1 int)

(Second window SQL 2005)

select object_name(rsc_objid), *
from sys.syslockinfo
where req_spid = 52 /*Where 52 = the SPID of the first window*/
order by 1

(Second window SQL Server 2000)

sp_lock 52 /*Where 52 = the SPID of the first window*/

When a very long-running query loads a temporary table using the SELECT INTO format, those same system table locks are held until the query completes and the data is loaded into the temp table. You can avoid this system table locking by manually creating the table with the CREATE TABLE command before loading the data into it.

For example, this code …

CREATE TABLE #TempTable
(object_id int)

INSERT INTO #TempTable
SELECT object_id
FROM sys.objects

… will require much less locking than this code:

SELECT object_id
INTO #TempTable
FROM sys.objects

While the total number of locks taken is the same, the length of time the locks are held for the first query will be much shorter. This allows other processes to create temp tables.

Typically, when developing SQL code, the development server has only a single user or a few users. When working on SQL code, it's important to know when the code will impact sessions other than the current session. An unexpected interaction can cause major performance issues.

Accessing data across linked servers

Linked servers are an excellent way to get data in real time from one server to another. However, incorrectly written linked server queries can quickly decrease system performance on one or both servers. While it’s easy to write these queries across linked servers, the query optimizer doesn’t always work as you would expect. I often see queries that join a local table to two remote tables and the queries take hours to run. That’s because the local optimizer doesn’t know which records to request from the remote table.

It therefore requests that the remote server transmit the entire table, and all that data is then loaded into a temporary table and the join is done locally. Unfortunately, because the local table is a temporary table — and not a physical table on the source system — the indexes on the remote table do not get created on the temporary table. Because of the lack of indexes, the expected query execution time skyrockets.

There are a couple of techniques you can use to improve query response time. The first is to create a stored procedure on the remote database that returns a record set, a subset of the remote tables, which is then loaded into a local temporary table and indexed as needed. The trick with this method is to give the remote procedure input parameters so that filter values can be passed in, reducing the number of returned records as much as possible. Fewer records reduce both the run time of the stored procedure and the network latency of transferring those records from the remote system to the local system.
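A minimal sketch of this first technique, assuming a linked server named REMOTESRV and a hypothetical remote procedure dbo.GetOrdersForCustomer that filters on a customer ID (all object names here are illustrative, not from the article):

```sql
-- Create the temp table that will hold the filtered remote result set
CREATE TABLE #RemoteOrders (
    OrderID    int,
    CustomerID int,
    OrderDate  datetime
);

-- Execute the remote procedure with a filter parameter so only the
-- needed rows cross the network
INSERT INTO #RemoteOrders (OrderID, CustomerID, OrderDate)
EXEC REMOTESRV.RemoteDB.dbo.GetOrdersForCustomer @CustomerID = 42;

-- Index the local copy so the subsequent join performs well
CREATE INDEX IX_RemoteOrders_CustomerID ON #RemoteOrders (CustomerID);

-- Join locally against the indexed temp table
SELECT o.OrderID, c.CustomerName
FROM dbo.Customers c
JOIN #RemoteOrders o ON o.CustomerID = c.CustomerID;
```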

The second technique you can use is a variation of the first method. You create local temporary tables for each of the remote tables and transfer over the columns and records needed from each of the remote tables. Next, index the tables as needed and join the temp tables locally.

While the second technique is easier and faster to set up and implement, the first method gives you a greater performance savings, as typically less data needs to be transferred between servers.
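The second technique can be sketched like this, again with illustrative names (REMOTESRV as the linked server, RemoteDB.dbo.Orders as a remote table):

```sql
-- Manually create the temp table, then pull over only the needed
-- columns and rows via the linked server's four-part name
CREATE TABLE #Orders (
    OrderID    int,
    CustomerID int,
    OrderDate  datetime
);

INSERT INTO #Orders (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM REMOTESRV.RemoteDB.dbo.Orders
WHERE OrderDate >= '20100101';

-- Index the local copy, then perform the join locally
CREATE INDEX IX_Orders_CustomerID ON #Orders (CustomerID);

SELECT o.OrderID, c.CustomerName
FROM dbo.Customers c
JOIN #Orders o ON o.CustomerID = c.CustomerID;
```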

Subqueries as join partners

When working with joins, you may want to manually control the order that data is selected. An easy (and usually safe) way to do this is to use subqueries as the join object instead of joining directly to a table.

In some instances, you can decrease your query execution time by forcing SQL Server to prefilter the data in the table. This method is not foolproof, and if used incorrectly it can increase the execution time of your query, so it should be fully tested before moving it to your production environment.
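For example, joining to a prefiltering subquery instead of the base table (table and column names are illustrative, not from the article):

```sql
-- Join to a prefiltered derived table instead of the full table,
-- which can steer SQL Server into filtering Orders before the join
SELECT c.CustomerID, o.OrderTotal
FROM dbo.Customers c
JOIN (
    SELECT CustomerID, OrderTotal
    FROM dbo.Orders
    WHERE OrderDate >= '20100101'
) o ON o.CustomerID = c.CustomerID;
```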
As we have seen, there are some quick and easy methods for improving query performance for some long-running processes. While these techniques will not apply to every issue you run across, they will help in some instances.


Posted in SQL

SSRS Reporting Services & Architecture

Posted by Alin D on December 15, 2010

SQL Server Reporting Services [SSRS] is a reporting system based on SQL Server. It provides a set of tools and services enabling us to create, manage, and deliver reports for an entire organization. It is a Microsoft product, first released as an add-on for SQL Server 2000.

The SSRS architecture is intentionally not given at the start of this article. Before looking at each component of the architecture, I prefer to take a practical approach; the architecture is explained at the end of the article.

Create SSRS Report

First of all, go to MS SQL Server 2005 => SQL Server Business Intelligence Development Studio.

1.gif

Move to File=>New=>Project=>Report Server Project

2.gif

After creating the new report project, we get two folders in the Solution:

Shared Data Sources => here we set database credentials.

Reports => here we add report files.

3.gif

SSRS Data Source

Right-click on the Shared Data Sources folder and add a data source. The following window panel will open, where we need to provide the database server details and database name.

To confirm that the defined database connects successfully, click on 'Test Connection'.

4.gif

5.gif

After successfully adding the data source, we can add a report to the Reports folder. Right-click on the Reports folder and choose Add => New Report. A window panel will open. Select the Report item and provide the report name 'Header_Report'.

6.gif

SSRS Report Design

After adding a new report, we see that the report has 3 sections in different tabs:

  1. Data: Here we put the SQL query or procedure that fetches the data we have to show on the report from the database.
  2. Layout: This is the designing section, where we format the report by dragging tables, rectangles, lines, etc. from the Toolbox, and drag data fields onto the report from the Dataset panel.
  3. Preview: This tab shows how the report will display to the end user.

All these 3 sections are circled in the below image.

7.gif

SSRS Toolbar

Before moving on to designing the report, let's briefly go through the report items available in the Toolbox.

  1. Textbox: To add any custom text to the report.
  2. Line: For drawing lines on the report.
  3. Table: For creating a table with rows and columns, header and footer. We can format the table according to our requirements.
  4. Image: For adding an image to the report.
  5. Chart: For adding different types of charts to the report.
  6. Subreport: For embedding one report in another, such as having header and footer reports on a report.

Toolbox items are shown in the below image:

9.gif

Now I have dragged 3 textboxes onto the report and put the text 'Dhania Hospital', 'Health is Wealth', and 'Bhiwani Haryana 127021' respectively.

10.gif

After previewing the report in the Preview tab, the report appears as shown below.

11.gif

After finishing Header_Report, we now create a new report, AdmittedPatientList.

12.gif

After adding the new report, move to the Data section of the report and select <New Data Set>. A new Dataset window will open. Here we choose the command type (stored procedure or text query) we need to use to fetch data from the database. In this report we use the stored procedure USP_GET_INPATIENT_REPORT.

13.gif

After adding the dataset, click on the Run button to execute the command and get the data. The Define Query Parameters window opens, where we need to pass values to the procedure parameters (@FROM_DATE, @TO_DATE, etc.).

14.gif

After the command executes, the data is shown in the panel below, and the Report Datasets panel is populated with the data fields from the USP_GET_INPATIENT_REPORT procedure.

15.gif

After adding the dataset to the report, we now move on to designing the report. Move to the Layout section of the report and drag a table onto it. As we drag a table, we get 3 sections in it:

  1. Header: here we put data that needs to be shown in the header of the report. We can have more than one row in the header, just by clicking on the header row and adding a new row.
  2. Details: this is the part of the table where we drag data fields from the Dataset panel.
  3. Footer: here we add items we need to show in the footer of the report.

16.gif

SSRS Subreport

In this report I am adding a subreport item to add a header to the report. The Header_Report subreport that we created previously is chosen in the Subreport property.

17.gif

Now we add 2 more rows in the header section of the report by right-clicking at the leftmost part of the header column.

18.gif

The report designer provides an Expression window to help the developer use different formulas, functions, and operations. We can directly drag any data field, such as patient name or address, from the Dataset window, or just right-click on any row cell and select the EXPRESSION option.

The EXPRESSION window is shown in the below image.

19.gif

By default, a table has only 3 columns, so we can add more columns as required by right-clicking on a column header and choosing the option to add a column to the left or right of the selected column.

We can merge a number of columns to accommodate the extra space required for a field. For example, in our current report we have to merge all the columns in the header section to put the text 'List of in patients from X Date to Y Date'.

A new row is added in the header section to hold the column names, such as S.No, Patient Name, etc. To format the text of a cell, just right-click on the cell and select PROPERTY. In the property window we can set the font size, font type, etc.

20.gif

Row formatting: To format a row, we need to open the row's property window by selecting the row, right-clicking it, and going to the Property option. In the property window we can set the border, font, type, back color, text alignment, padding, etc.

Now we drag data fields such as patient name and address into the detail section of the table, just below their corresponding headers. This is shown in the below image.

21.gif

Now the report is ready to use. Move to the Preview panel of the report and pass the required report parameters (FROM_DATE, TO_DATE, etc.).

22.gif

Publish SSRS Report on Report Server

After creating the report, we need to publish it on the report server so it is available to the end user.

To publish the report on the server, we need to set the credentials of the report server. Go to the properties of the project in Solution Explorer, as shown in the below image.

23.gif

In the Property window, set the 'TargetServerURL' field with the report server (i.e. NSARORA) and the report virtual folder (i.e. ReportServer$NSARORA).

The 'TargetReportFolder' field contains the folder name on the report server where published reports are saved. This is shown in detail in the below image.

24.gif

After configuring the report server settings, we can deploy the reports to the server.

25.gif

As the deployment of the report starts, the output window shows the deployment progress.

26.gif

SSRS Report available to End User

After publishing the report on the server, the report is available to users. There are two ways to expose the report to a user:

  1. Report Manager
  2. SSRS Report in ASP.NET Application

Report Manager

Report Manager is a web interface that allows access to reports published on the Report Server. Report Manager can be accessed in a browser by entering the Report Server path, i.e. HTTP://NSARORA/ReportServer$NSARORA?SSRS_Demo_Project


After accessing Report Manager, we can navigate to the AdmittedPatientList report shown in the list in the above image.

We need to provide the FROM_DATE and TO_DATE parameters to access the list of patients admitted to the hospital.

Alternatively, we can access Report Manager through MS SQL Server Management Studio.

Connecting to Reporting Services is shown in the below image.

After connecting to the reporting server, we can see the folder where we published our reports on the report server.

We published our reports in the 'SSRS_Demo_Project' folder, which contains the AdmittedPatientList report, as shown below.

To open a report in Report Manager, right-click on the report and select 'View Report'.

To access a report in Report Manager, it first requires authentication.

After successfully authenticating, we are able to view the report shown below.

SSRS Report in ASP.NET Application:

The first way to provide an SSRS report to the end user is Report Manager, which we just studied above. But Report Manager is usually used by system administrators, so we need to create a custom ASP.NET application that will be available to the user. Here we create an ASP.NET application and integrate the SSRS report into an aspx web page.

ReportViewer Control

First of all, create an ASP.NET application in VS 2005. Add a ReportViewer control to the aspx page. We include two textboxes for passing the FROM_DATE/TO_DATE parameters to the report. The page design is shown in the below image.

The design code is shown below:

ASPX [Design page]

<form id="form1" runat="server">
<div>
<table width="100%" class="lab1" align="center">
<tr>
<td align="right">From</td>
<td align="left">
&nbsp;<asp:TextBox ID="txtFromDate" runat="server"></asp:TextBox></td>
</tr>
<tr>
<td align="right">To</td>
<td align="left">
&nbsp;<asp:TextBox ID="txtToDate" runat="server"></asp:TextBox></td>
</tr>
<tr>
<td></td>
<td align="left">
<asp:Button ID="btnSubmit" runat="server" Text="Submit" CssClass="Button" />
<asp:Button ID="btnReset" runat="server" Text="Reset" CssClass="Button" /></td>
</tr>
<tr>
<td colspan="3" bordercolor="green">&nbsp;<rsweb:ReportViewer ID="ReportViewer1" BorderWidth="10" BorderColor="AliceBlue" ProcessingMode="Remote" Visible="true" runat="server" Width="688px">
</rsweb:ReportViewer>
</td>
</tr>
</table>
</div>
</form>

Report Parameter in SSRS

On the Submit button click, we pass the server (i.e. NSARORA), the report server (i.e. Reportserver$nsarora), and the path of the report in the publishing folder (/SSRS_Demo_Project/AdmittedPatientList).

Here SSRS_Demo_Project is the folder name where we published our reports on the report server. We create ReportParameter objects to pass parameters to the ReportViewer control.

ASPX.VB [Code-behind page]

Partial Class _Default
    Inherits System.Web.UI.Page

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    End Sub

    Protected Sub btnSubmit_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSubmit.Click
        Try
            ReportViewer1.ShowDocumentMapButton = False
            ReportViewer1.DocumentMapCollapsed = True
            ReportViewer1.ShowParameterPrompts = False
            ReportViewer1.ShowBackButton = True
            Dim s As System.UriBuilder = Nothing
            s = New System.UriBuilder
            s.Host = "NSARORA"
            s.Path = "Reportserver$nsarora"
            ReportViewer1.ServerReport.ReportServerUrl = s.Uri
            ReportViewer1.ServerReport.ReportPath = "/SSRS_Demo_Project/AdmittedPatientList"
            Dim PARAM1 As New Microsoft.Reporting.WebForms.ReportParameter("IS_DISCHARGE", "-1")
            Dim PARAM2 As New Microsoft.Reporting.WebForms.ReportParameter("FROM_DATE", txtFromDate.Text.Trim())
            Dim PARAM3 As New Microsoft.Reporting.WebForms.ReportParameter("TO_DATE", txtToDate.Text.Trim())
            Dim P As Microsoft.Reporting.WebForms.ReportParameter() = {PARAM1, PARAM2, PARAM3}
            ReportViewer1.ServerReport.SetParameters(P)
        Catch ex As Exception
            ' Note: swallowing the exception hides configuration errors; log or surface ex in real code
        End Try
    End Sub
End Class

SSRS Report on ASP.NET Application

Report Builder

Up to now we have studied how to develop an SSRS report, publish it, and make it available to the end user via Report Manager or a custom application like the one we created in the previous section.

Microsoft also provides a facility for end users to create reports of their own. This is the role of Report Builder: making an environment available to end users so they can create reports there.

The question arises: why do we need to give end users the option to create their own reports? The reason is that users often need to analyze data according to their own requirements.

Report Builder does not give the end user full access to the SQL Server database. It restricts access so that only the tables required by the user are available; other tables are not shown to the user.

To provide this limited access to the database, Report Builder uses a Report Model.

When we create a Report Model project, we get Data Source, Data Source Views, and Report Models folders. Next we are going to use each of these folders:

  1. Data Source
  2. Data Source View
  3. Report Model

First of all, we need to give the details of the data source from which we have to fetch data.

The database server and database are selected, and the connection is tested here.

After configuring the data source in the Report Model project, we need to define a Data Source View. In the Data Source View we choose which tables we want to expose to the end user.

So right-click on the Data Source Views folder and choose Add New Data Source View.

The data source (dsDataSource) that we created previously is chosen here.

After choosing the data source, we select the tables to be made available to the end user.

In this application we have chosen only one table (TBL_PATIENT_REGISTER).

After adding the Data Source View, we need to create a Report Model by right-clicking on the Report Models folder.

In the Report Model we choose the Data Source View.

Once we complete the Report Model, all tables and their corresponding columns are shown in the application.

Now our Report Model project is complete, so we need to deploy it to the server. Make the changes in the project properties as shown below.

Here nsarora is our server and ReportServer$nsarora is the report server.

Now go to Solution Explorer, right-click on the project name, and choose Deploy.

We have now successfully deployed our Report Model project.

Now it is time to hand the facility to create reports over to the end user, which is where Report Builder comes in.

Report Builder is a tool provided to end users to create their own reports.

To access Report Builder, we use the following general link: http://servername/reportserver/ReportBuilder/ReportBuilder.application

Type this link into a browser to access Report Builder.

Authentication details must be provided to access the report server for Report Builder.

After successfully connecting to the report server, Report Builder will open. Report Builder will contain the Report Model we created in the last section.

The different sections of Report Builder are shown in the below image:

Now we create a report step by step in Report Builder. Drag the Patient_ID, Name, DOB, and Address fields onto the drag-and-drop columns section.

After selecting the fields for the report, we need to set the filter criteria that determine which records will be shown. To configure the filter we use the Filter Data window.

We can also apply sorting criteria to the report using the sorting window.

So we have created a report in Report Builder; now we need to run it. To do so, click on the RUN REPORT button in the Report Builder menu.

Report Server Database

Deployed reports, data sources, report models: all these SSRS reporting objects are stored in the Report Server database.

To connect to the report database, go to MS SQL Server Management Studio and connect to Reporting Services.

Enter the authentication details to connect to the report database.

After connecting, we have Data Sources, folders containing deployed reports, and Models folders.

The below image shows the details of these folders of the Report Server database.

Data Sources folder: This folder contains the dsDataSource that we created in the Report Model project.

Models folder: Contains the 'SSRS Demo Report Model' file of the Report Model project.

Deployed reports folder: The 'SSRS_Demo_Project' folder contains all the reports we created in this article and deployed to the server.

SSRS Architecture

The SSRS architecture includes the different tools and services involved in creating and deploying reports and making them available to the end user.

A block diagram of the architecture is shown below.

Following are the components of the SSRS architecture:

1. Reporting Services:

This is the execution engine of SSRS that runs all the services involved in reporting. These services include:

  1. Authentication Service: involved when accessing reports or any tools on the reporting server.
  2. Deployment Service: involved when deploying or publishing a report to the server.
  3. Data Processing: processes the data that needs to be shown in a report.
  4. Rendering Service: on request of a report, makes the response available as an HTML stream.

2. Data Sources:

SQL Server, Excel sheets, or XML files may be the source of data for reporting.

3. Report Designer:

This is the tool a developer uses to create reports. We have already used this tool to design reports.

4. Report Server Database:

Deployed reports, data sources, report models, and other SSRS reporting objects are stored in the Report Server database.

5. Report Builder:

This is the tool provided to end users to develop reports themselves.

6. Report Manager:

This is a web interface that allows access to reports deployed on the report server.

A brief description of the SSRS architecture has been given above; we have already gone through each component of the architecture in detail.

Posted in SQL

Data Compression in SQL Server 2008

Posted by Alin D on December 15, 2010

Data compression is a new feature introduced in SQL Server 2008. It enables DBAs to effectively manage MDF files and backup files. There are two types of compression:

1. Row Level Compression: This type of compression works at the row level of the data page.

  • Fixed-length data types are effectively stored as variable-length. For instance, char(10) is a fixed-length data type; if we store "Venkat" in it, the name occupies 6 characters and the remaining 4 characters are wasted in legacy versions. In SQL Server 2008, the space is used effectively: only 6 characters are stored for this value.
  • NULL values and zeros are not stored on disk. Instead, they have a reference in the CI structure.
  • The amount of metadata used to store the row is reduced.

2. Page Level Compression: This compression is effective at the page level.

  • Page level compression includes row level compression, and works on top of it.
  • Prefix Compression: works at the column level. Repeated data is removed and a reference is stored in the compression information (CI) structure, which is located next to the page header.
  • Dictionary Compression: implemented on the page as a whole. It removes all repeated data and places a reference on the page.
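As a sketch of how these compression types are enabled and evaluated in SQL Server 2008, assuming a table named dbo.SalesHistory (the table name is illustrative):

```sql
-- Estimate how much space page compression would save before enabling it
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'SalesHistory',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Enable row-level compression on the table
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = ROW);

-- Or enable page-level compression (which includes row compression)
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE);
```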

How it works:

Consider a user requesting data. The relational engine takes care of compiling and parsing the request, and asks the storage engine for the data.

Data Compression

Now, our data is in compressed format. The storage engine sends the compressed data to the buffer cache, which in turn takes care of handing the data to the relational engine in uncompressed format. The relational engine performs its modifications on the uncompressed data and sends it back to the buffer cache. The buffer cache compresses the data and keeps it for future use; in turn, it sends a copy to the storage engine.

Advantages:

  1. More data can be kept in the buffer cache, so there is less need to go to disk, which in turn reduces I/O.
  2. Disk space usage is greatly reduced.

Disadvantages:

  1. More CPU cycles are used to decompress the data.
  2. The impact can be negative if the data doesn't have many NULL values, zeros, or values smaller than their declared data types.

Posted in SQL

Identity Property Range Checking in SQL Server

Posted by Alin D on December 15, 2010

The IDENTITY property for a column of a numerical data type is a frequently used method to achieve system-generated "uniqueness" for each row in a table. Such a column in turn is a quite popular choice for the PRIMARY KEY constraint. Most of the time one would choose the data type int for the underlying column. However, the IDENTITY property can be defined on any integer-like data type and even on the decimal data type, as long as the chosen scale is 0. By default SQL Server uses only positive values unless you specify otherwise. So, when you opt to start with a negative seed value, this is perfectly fine for SQL Server, and by doing so you essentially double the range of possible values for most of the available data types. It may hurt one's aesthetic sensibilities, but if you take negative values into account, this gives you the following range of possible values:

tinyint: 0 to 255
smallint: -32,768 to 32,767
int: -2,147,483,648 to 2,147,483,647
bigint: -2^63 to 2^63-1

If you decide to use a decimal data type such as decimal(38, 0), this gives you a range of -(10^38-1) to 10^38-1 possible values, which, for almost any practical purpose, should be more than enough.
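As a sketch, declaring an IDENTITY column on decimal(38, 0) and seeding it at the bottom of the range would look like this (the table name is illustrative):

```sql
-- Seed at the minimum decimal(38,0) value to make the full range available
CREATE TABLE dbo.big_ids (
    id decimal(38, 0) IDENTITY(-99999999999999999999999999999999999999, 1) PRIMARY KEY,
    payload varchar(50)
);
```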

But what can actually happen if you are about to exceed this range?

Let’s create a very simple test case:

CREATE TABLE dbo.id_overflow (
    col1 int IDENTITY(2147483647,1)
);
GO

The above script creates a new table dbo.id_overflow with only one column col1. This column is of type int with the IDENTITY property defined on it. The seed value is chosen to be the maximum value for the int type which is 2147483647. I just arbitrarily picked the int data type, I could have chosen any other eligible data type, the result would still be the same. So, when we now insert into this table, the very first insert statement is likely to succeed, while any subsequent one will fail with an arithmetic overflow error.

--This insert will succeed
INSERT INTO dbo.id_overflow DEFAULT VALUES;
--This insert will fail
INSERT INTO dbo.id_overflow DEFAULT VALUES;

(1 row(s) affected)
Msg 8115, Level 16, State 1, Line 2
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

So far, everything is as expected and when we look at the content of the table we only see the one row from the first insert.

SELECT
    *
FROM
    dbo.id_overflow;

col1
2147483647

(1 row(s) affected)

But what do you do in such a case? You can't insert any more rows into this table. Even if there are gaps in the sequence of the existing IDENTITY values, these gaps won't be reused automatically. Once a value is allocated, SQL Server doesn't care about it anymore, and if an insert doesn't succeed for whatever reason, the freshly allocated value is gone.

Essentially, the only feasible solution to this problem is to choose a “bigger” data type. So, a very simplified change script to change the data type in our example to bigint would look like this:

IF OBJECT_ID('dbo.id_overflow') IS NOT NULL
    DROP TABLE dbo.id_overflow;
GO
CREATE TABLE dbo.id_overflow (
    col1 int IDENTITY(2147483647,1)
)
GO

--This insert will succeed
INSERT INTO dbo.id_overflow DEFAULT VALUES;

--Now change the data type to a bigger one.
ALTER TABLE dbo.id_overflow ALTER COLUMN col1 bigint;

--This insert will now succeed as well
INSERT INTO dbo.id_overflow DEFAULT VALUES;

SELECT
    *
FROM
    dbo.id_overflow;

If you run this batch, it will finish without an error and yield the expected resultset of 2 rows. But, as mentioned above, a change script in almost any real-world database would be much more complex. Indexes would have to be changed, referencing tables would have to be changed, code parts where the value of that column is assigned to a variable of type int, etc…

It is not hard to predict, that you’re in deep trouble when this table is one of your main tables in a database and is referenced by many other tables and/or in many places in your code.

I was bitten by a similar scenario not that long ago. Fortunately it was "only" a lookup table with an IDENTITY column on a smallint typed column. And I was fortunate that I could simply reseed the IDENTITY value, because the last 7,000+ inserts had failed due to a misunderstanding between the developers of the calling application and me on how a certain parameter to a procedure should be used. But it still was enough trouble for me to decide to write a small check script that is now part of my weekly scripts and that gives me all the tables having such an IDENTITY column, along with the last value consumed as well as the buffer I have left before I run out of values again. Here it is:

;WITH TypeRange AS (
SELECT
    'bigint' AS [name],
    9223372036854775807 AS MaxValue,
    -9223372036854775808 AS MinValue
UNION ALL
SELECT
    'int',
    2147483647,
    -2147483648
UNION ALL
SELECT
    'smallint',
    32767,
    -32768
UNION ALL
SELECT
    'tinyint',
    255,
    0
),
IdentBuffer AS (
SELECT
    OBJECT_SCHEMA_NAME(IC.object_id) AS [schema_name],
    O.name AS table_name,
    IC.name AS column_name,
    T.name AS data_typ,
    CAST(IC.seed_value AS decimal(38, 0)) AS seed_value,
    IC.increment_value,
    CAST(IC.last_value AS decimal(38, 0)) AS last_value,
    CAST(TR.MaxValue AS decimal(38, 0)) -
        CAST(ISNULL(IC.last_value, 0) AS decimal(38, 0)) AS [buffer],
    CAST(CASE
            WHEN seed_value < 0
            THEN TR.MaxValue - TR.MinValue
            ELSE TR.maxValue
        END AS decimal(38, 0)) AS full_type_range,
    TR.MaxValue AS max_type_value
FROM
    sys.identity_columns IC
    JOIN
    sys.types T ON IC.system_type_id = T.system_type_id
    JOIN
    sys.objects O ON IC.object_id = O.object_id
    JOIN
    TypeRange TR ON T.name = TR.name
WHERE
    O.is_ms_shipped = 0)

SELECT
    IdentBuffer.[schema_name],
    IdentBuffer.table_name,
    IdentBuffer.column_name,
    IdentBuffer.data_typ,
    IdentBuffer.seed_value,
    IdentBuffer.increment_value,
    IdentBuffer.last_value,
    IdentBuffer.max_type_value,
    IdentBuffer.full_type_range,
    IdentBuffer.buffer,
    CASE
        WHEN IdentBuffer.seed_value < 0
        THEN (-1 * IdentBuffer.seed_value +
          IdentBuffer.last_value) / IdentBuffer.full_type_range
        ELSE (IdentBuffer.last_value * 1.0) / IdentBuffer.full_type_range
    END AS [identityvalue_consumption_in_percent]
FROM
    IdentBuffer
ORDER BY
    [identityvalue_consumption_in_percent] DESC;

Since SQL Server 2005 it has been really easy to get this information. As you can see from the script, I have omitted the decimal(38,0) alternative; for me, a bigint column with a negative seed value holds more values than I could possibly ever need. I got into the habit of running this daily to monitor how many values are left in the buffer before it blows up again, and to get a feeling for how urgent the inevitable changes to the database are. Another possible variation would be to send out an alert when a certain threshold is reached, but that I leave to your imagination.
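The rescue mentioned above, reseeding the IDENTITY value, looks roughly like this; the table name and seed value below are hypothetical, and you should only reseed after verifying that the failed inserts really were not persisted:

```sql
-- Report the current IDENTITY value without changing it
DBCC CHECKIDENT ('dbo.StatusLookup', NORESEED);

-- Reset the seed so the next insert produces 42 + increment
DBCC CHECKIDENT ('dbo.StatusLookup', RESEED, 42);
```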

Posted in SQL | Tagged: , , , , , | Leave a Comment »

Setting up Transactional Replication in SQL Server 2008 R2.

Posted by Alin D on December 9, 2010

Replication is one of the high-availability features available in SQL Server. Transactional replication is used when DML or DDL schema changes performed on an object in a database on one server need to be reflected in a database residing on another server. This change happens almost in real time (i.e. within seconds). In this article, I will demonstrate a step-by-step approach to configuring transactional replication in SQL Server 2008 R2.

Scenario: An Address table which belongs to the Person schema in the Adventureworks database is replicated to the Adventureworks_Replication database residing on the same server. The Adventureworks_Replication database acts as the subscriber. In practice, the subscriber is normally present on a separate database server.

Before we start with the configuration, we need to understand three important terms:

1. Publisher
2. Subscriber
3. Distributor Database

Let’s discuss each of these in detail.

Publisher:

The Publisher is the database on which the DML or DDL schema changes are performed.

Subscriber:

The Subscriber is the database which receives the DML as well as DDL schema changes performed on the publisher. The subscriber database normally resides on a different server in another location.

Distribution Database:

A database which contains all the replication commands. Whenever any DML or DDL schema change is performed on the publisher, the corresponding commands generated by SQL Server are stored in the distribution database. This database can reside on the same server as the publisher, but it is recommended to keep it on a separate server for better performance. I have observed that if you keep the distribution database on the same machine as the publisher database and there are many publishers, it has an impact on the performance of the system, because one distrib.exe process is created for each publisher.

Let us now begin with the Configuring of the Transactional Replication.

There are 3 steps involved in configuring transactional replication:

1. Configuring the Distribution Database.

2. Creating the publisher.

3. Creating the subscriber.

Configuring the Distribution Database

1. Connect to the Microsoft SQL Server 2008 R2 Management Studio.

2.  Right Click on the Replication node and Select Configure Distribution as shown in the screen capture below:

3. A new window appears on the screen as shown in the screen capture below:

4. Click  the Next> button and a new window appears on the screen as shown in the screen capture below:

5. As you can see in the above screen capture, the wizard offers two choices: either the server on which replication is being configured will host the distribution database itself, or some other server will host it. Select whichever option suits your requirements. I chose the first option, i.e. the server on which replication is configured will itself hold the distribution database. Then click on the Next> button as shown in the screen capture above.

6. A new window appears as shown in the screen capture below:

7. Select the first option, i.e. Yes, configure the SQL Server Agent service to start automatically and click on the Next> button as shown in the screen capture above.

8. A new window appears on the screen as shown in the screen capture below:

As you can see in the above screen capture, you are asked where the Snapshot folder should reside on the Server. Let us first understand what the Snapshot folder exactly is.

The Snapshot Agent prepares snapshot files containing the schema and data of published tables and database objects, and stores the files in the snapshot folder. This folder should never be placed on the C: drive of the server, i.e. the drive hosting the operating system.

Create a folder on any other drive to hold the Snapshot folder and Click on the Next> button as shown in the screen capture above.

9. A new window appears as shown in the screen capture below:

As you can see in the above screen capture, it displays information such as the distribution database name and the locations where the data and log files will reside. Click on the Next> button as shown in the screen capture above.

10. A new window appears as shown in the screen capture below:

11. Click on the Next> button.

12. Click on the Next> button as shown in the screen capture below:

13. Click on the Finish button as shown in the screen capture below:

14. Once done, a new database named distribution is created. To confirm this, just expand the System Databases node and you will be able to view the distribution database; please refer to the screen capture below:
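For those who prefer scripts over the wizard, the same distribution setup can be sketched with the documented replication stored procedures; the password, snapshot folder, and the use of the local server as its own distributor below are assumptions:

```sql
-- Configure the local server as its own distributor
EXEC sp_adddistributor @distributor = @@SERVERNAME,
     @password = N'StrongP@ssw0rd';

-- Create the distribution database (the default name is "distribution")
EXEC sp_adddistributiondb @database = N'distribution';

-- Register the local server as a publisher using this distributor,
-- with the snapshot folder on a non-system drive
EXEC sp_adddistpublisher @publisher = @@SERVERNAME,
     @distribution_db = N'distribution',
     @working_directory = N'E:\ReplData';
```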

Creating the Publisher

The following steps need to be followed while creating the publisher.

1. Right Click on Local Publications and select New Publications, please refer the screen capture below:

2. Click on the Next> button as shown in the screen capture below.

3. Select the database which is going to act as a publisher. In our case, I select the AdventureWorks database. Please refer the screen capture below and Click on the Next> button.

4. Select Transactional Replication from the available publication type and Click on the Next> button as shown in the screen capture below:

5. Select the objects that you want to publish. In this example, we will select the Address table from the Person schema, which we need to replicate. Select the table as shown in the screen capture below and click on the Next> button. One important point to note is that only tables that have a primary key column can be replicated in transactional replication.

6. Since there are no filtering conditions, Click on the Next> button as shown in the screen capture below:

7. Check the first radio button as shown in the screen capture below and Click on the Next> button.

8. Click on the Security Settings tab as shown in the screen capture below.

A new window appears as shown in the screen capture below.

Select Run under the SQL Server Agent service account as the account under which the Snapshot Agent process will run and Connect to the publisher By impersonating the process account as shown in the screen capture below and then Click on the OK button.

Click on the Next> button as shown in the screen capture below.

9. Click on the Next> button as shown in the screen capture below.

10. Give a suitable name to the publication and click on the Finish button as shown in the screen capture below.
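The wizard steps above map roughly to the following T-SQL; the publication name is an assumption:

```sql
USE AdventureWorks;

-- Enable the database for transactional publishing
EXEC sp_replicationdboption @dbname = N'AdventureWorks',
     @optname = N'publish', @value = N'true';

-- Create the transactional publication
EXEC sp_addpublication @publication = N'AW_Address_Pub',
     @repl_freq = N'continuous', @status = N'active';

-- Publish the Person.Address table as an article
EXEC sp_addarticle @publication = N'AW_Address_Pub',
     @article = N'Address',
     @source_owner = N'Person', @source_object = N'Address';

-- Create the Snapshot Agent job for the publication
EXEC sp_addpublication_snapshot @publication = N'AW_Address_Pub';
```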

Creating the Subscriber

Once the publisher is created the next step is to create the subscriber for it.

The following steps need to be performed to create the subscriber.

1. Right Click on the publisher created and select New Subscriptions as shown in the screen capture below.

2. Click on the Next> button as shown in the screen capture below.

3. Click on the Next>  button as shown in the screen capture below.

4. Click on the Next> button as shown in the screen capture below.

5. As shown in the screen capture below, the wizard asks for the subscriber name as well as the subscription database. The subscriber database can be created beforehand by restoring the publisher database, or by creating a new database as shown in the screen capture below.

If you have already restored the backup of the database which is a publisher, then the database name will appear in the dropdown as shown in the screen capture below:

If we want to create the subscriber database now, it can be done as follows:

Click on New Database as shown in the screen capture below.

A new window appears as shown below. Give a suitable database name as well as the path where the data and log file are going to reside.

Click on the OK button.

If the subscriber is some other server, then the following steps need to be performed.

Click on the down arrow available on the Add Subscriber button as shown in the screen capture below.

Click on Add SQL Server Subscriber as shown in the screen capture above.

A new window appears which asks for the SQL Server name as well as the authentication needed to connect to the SQL Server; please refer to the screen capture below.

6. Click on the Next> button as shown in the screen capture below.

7. Click on the button as shown in the screen capture below. Here we need to specify the process account as well as the connection options for the distribution agent.

8. A new window appears as shown in the screen capture below.

9. Specify the distribution agent to run under the SQL Server Agent Service Account. Also connect to the distributor as well as the subscriber by impersonating the process account. Please refer the screen capture below.

10. Click on the OK button as shown in the screen capture above.

11. Click on the Next> button as shown in the screen capture below.

12. Ensure that the Agent is scheduled to Run Continuously and then click on the Next> button as shown in the screen capture below.

13. Ensure that the Subscriber is initialized immediately and then click on the Next> button as shown in the screen capture below.

14. Click on the Next> button as shown in the screen capture below.

15. Click on the Finish button as shown in the screen capture below.

16. This creates a subscriber for the corresponding publisher.

17. Expand the publisher node and you shall be able to view the subscriber as shown in the screen capture

Thus, we have successfully set up transactional replication in SQL Server 2008 R2.
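For reference, the subscription can also be scripted; the publication and subscriber database names below are assumptions:

```sql
USE AdventureWorks;

-- Create a push subscription to the subscriber database
EXEC sp_addsubscription @publication = N'AW_Address_Pub',
     @subscriber = @@SERVERNAME,
     @destination_db = N'Adventureworks_Replication',
     @subscription_type = N'Push', @sync_type = N'automatic';

-- Create the Distribution Agent job; frequency_type 64 means the
-- agent starts with SQL Server Agent and runs continuously
EXEC sp_addpushsubscription_agent @publication = N'AW_Address_Pub',
     @subscriber = @@SERVERNAME,
     @subscriber_db = N'Adventureworks_Replication',
     @frequency_type = 64;
```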

Posted in SQL | Tagged: , , , , , | Leave a Comment »

Restoring SQL Server 2008 R2 Backup file to SQL Server 2008

Posted by Alin D on November 26, 2010

Recently I had to restore a SQL Server 2008 R2 database to a database on another machine, and I ended up getting the message:

“The Database was backed up on a server running version 10.50.1600. That version is incompatible with this server, which is running 10.00.1600.”

When exploring the cause, I found that the database I took the backup from was on SQL Server 2008 R2.

It was the same backup file that was used to restore on another machine, and interestingly, the other machine had SQL Server 2008.

The Version 10.50 is SQL Server 2008 R2 whereas 10.00 is SQL Server 2008.

Also, the same SQL Server Management Studio 2008 was used to access both server instances (2008 and 2008 R2).

It was a bit confusing for me, since I was able to access SQL Server 2008 R2 Express from SQL Server Management Studio 2008 but was unable to restore to SQL Server 2008 Express. Then I realised that, because the on-disk format differs between the versions, a newer backup cannot be restored to an older version, while restoring a SQL Server 2008 database to SQL Server 2008 R2 is possible. :)

I also had another option: generate the CREATE SQL scripts and execute them in SQL Server 2008 and 2005, and that worked fine too :)
To find the version of Microsoft SQL Server 2008, connect to SQL Server 2008 by using SQL Server Management Studio and execute the query:

SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')

Here’s a nice reference of the list of SQL Server versions
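To avoid this situation up front, you can also read the version directly from the backup file before attempting a restore; a small sketch, where the backup path is an assumption:

```sql
-- Inspect the backup header without restoring it; the
-- SoftwareVersionMajor/Minor/Build columns show the source server version
RESTORE HEADERONLY
FROM DISK = N'C:\Backups\AdventureWorks.bak';
```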

Posted in SQL | Tagged: , , | Leave a Comment »

SQL 2000 SP4 to 2005 In-place Upgrade

Posted by Alin D on November 1, 2010


If you are upgrading SQL 2000 SP4 to 2005 and the Analyzing Upgrade step fails with a return code of -1, and the event viewer shows this:

  • Source: .NET Runtime 2.0 Error
    EVENTID: 5000
    EventType clr20r3, P1 bpacmd.exe, P2 2005.90.1399.0, P3 434f5c06, P4 bpacmdx, P5 9.0.242.0, P6 434f5c04, P7 7, P8 7a, P9 system.io.filenotfoundexception, P10 NIL.

You need to do the following on the computer you are installing to:

  • Create a new directory called BPAClient at C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\BPA
  • Copy the BPAClient.dll from the C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\BPA\Bin directory into the new directory
  • Restart Installation

Posted in SQL | Tagged: , | Leave a Comment »

Slipstream Installations in SQL Server 2008

Posted by Alin D on October 18, 2010

With the release of SQL Server 2008 SP1, Microsoft provides the capability to create Slipstream installations of SQL Server 2008. Slipstreaming is a method of integrating a SQL Server 2008 update with the original installation media so that the original media and update are installed at the same time. This capability can be a huge timesaver over having to manually run a service pack and possible cumulative update installations after running a full SQL Server install, especially if you have to repeat the installation in multiple environments. Slipstreaming is supported in the following scenarios:

  • Installing the original media and a service pack
  • Installing the original media, a service pack, and a cumulative update to the service pack

Note Slipstreaming a cumulative update for SQL Server 2008 with the original media but without a service pack is not supported because slipstreaming wasn’t supported until SQL Server 2008 SP1 was released. Also, a Slipstream installation cannot be performed to update a SQL Server 2008 instance to SQL Server 2008 R2.

If you are doing a single install of SQL Server 2008 and at the same time want to apply SP1 and possibly a cumulative update as well, you can run the Slipstream installation by performing the following steps:

1.

If they are not already installed on the target machine, install the required prerequisites for the SQL Server 2008 Installer (.NET Framework 3.5 SP1 and Windows Installer 4.5). You can install them manually from the SQL Server install disk (the installers are located in the Drive_Letter:\platform\redist\Windows Installer folder). Alternatively, after you extract the service pack files, run the sqlsupport.msi file from within the folder where the service pack files have been extracted. For example, if you extracted the service pack to the C:\SQL2K8SP1 folder on an x86 platform, this file would be found in the C:\SQL2K8SP1\x86\setup\1033 folder.

Note To confirm whether the setup support files are installed, search for the Microsoft SQL Server 2008 Setup Support Files entry in the Programs and Features Control Panel (or the Add or Remove Programs Control Panel in operating systems prior to Windows Vista or Windows Server 2008).

Note On the IA-64 platform, the .NET Framework 3.5 is not supported; the .NET Framework 2.0 SP2 is required instead. The .NET Framework 2.0 SP2 installer is located at Drive_Letter:\ia64\redist\2.0\NetFx20SP2_ia64.exe on the source media.

2. If not done already, download the Service Pack (PCU) package that matches your system architecture and, if desired, the cumulative update (CU) package you want to install.

3. For each package you want to include in the Slipstream installation, extract the contents to a folder on the local drive by running a command similar to the following at the command prompt from within the folder where you downloaded the package(s):

   Name_of_the_PCU_or_CU_package.exe /x:Root_of_path_to_extract_to\<PCU | CU>


4. Now things get a bit tricky. Because Slipstream support was introduced with SP1, the setup.exe program that shipped with the original SQL Server 2008 installation media doesn’t support the /PCUSource or /CUSource options that allow you to specify the locations of the service pack and cumulative updates to be slipstreamed. Instead, you need to run the SQL Server 2008 Setup program for Service Pack 1 and specify the action as INSTALL, along with the file paths for the original media, the service pack, and any cumulative update files. These are specified using the /ACTION, /MEDIASource, /PCUSource, and /CUSource parameters. The following example shows how to run a slipstream install of SQL Server 2008 from the install CD in the D: drive with SP1 extracted to the C:\SQLServer2008SP1 folder:

C:\SQLServer2008SP1>setup.exe /PCUSource=C:\SQLServer2008SP1 /ACTION=INSTALL /MEDIASOURCE=D:
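If a cumulative update was extracted as well, the /CUSource parameter can be appended to the same command; the CU folder name below is an assumption:

```
C:\SQLServer2008SP1>setup.exe /ACTION=INSTALL /MEDIASOURCE=D: /PCUSource=C:\SQLServer2008SP1 /CUSource=C:\SQL2K8CU1
```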

This command runs the SQL Server installation in the normal GUI mode, requiring you to specify and confirm all settings. If you want, you can also choose to run the install in a limited interface or automated mode, as described previously in this chapter in the section describing how to use a configuration file. However, the first time you run a Slipstream installation, you should at least use an interface that allows you to view the Ready to Install page before running the installation so that you can verify whether the desired Slipstream installation is being performed. If the setup utility is running a Slipstream installation, it is indicated in the Action field, as shown in Figure 1.

Figure 1. Verifying a Slipstream installation on the Ready to Install page.

Posted in SQL | Tagged: , , , , , , , , | Leave a Comment »

SQL 2008 : Reporting Services Architecture

Posted by Alin D on October 11, 2010

When referring to SSRS as a platform, we are actually talking about a cohesive set of development tools, configuration tools, web services, applications, and utilities, all working together to deliver enterprise-grade reporting. In a nutshell, the platform includes the following components:

  • A single Windows service, listed in the Windows Service Control applet as SQL Server Reporting Services (InstanceName), which acts as a host for and provides centralized control of SSRS’s background processing engine, web services, and Report Manager web application. It also handles encryption and decryption of stored credentials and connection information.
  • Two databases, known as the Report Server catalogs (note that the following are their default names; you can name them whatever you want using the Reporting Services Configuration Manager, or RSCM):
    • ReportServer— Stores all reporting objects, including reports, security settings, schedules, subscriptions, snapshots, users, configuration settings, and encryption keys.
    • ReportServerTempDB— Stores ephemeral report data (sometimes called intermediate processing products), such as cached reports, session and execution data.
  • Four .NET web services, which serve as SSRS’s programmatic APIs:
    • ReportService2005.asmx— Provides methods for managing all aspects of an SSRS instance configured in native mode.
    • ReportService2006.asmx— Provides methods for managing all aspects of an SSRS instance configured in SharePoint-integrated mode.
    • ReportService2010.asmx— Subsumes functionality of ReportService2005.asmx and ReportService2006.asmx.
    • ReportExecution2005.asmx— Provides methods for custom report rendering and execution.
  • Three command-line applications, all located in %PROGRAMFILES%\Microsoft SQL Server\100\Tools\Binn:
    • RSKeyMgmt.exe— Provides encryption management for securing database-stored Report Server content, such as credentials, connection strings, accounts, and the encryption key itself. This tool is also used to join servers in an SSRS farm configuration (via the -j option).
    • RS.exe— Enables developers to write scripts in VB .NET that leverage the web service APIs.
    • RSConfig.exe— Enables you to programmatically change SSRS configuration values in RSReportServer.config (the configuration file for the web service APIs), either on a single or multiple machines.
  • Report Manager, an administrative website that provides Web-based control over SSRS, including the ability to
    • Add or remove, organize, configure, and run all kinds of SSRS objects, including
      • Reports, report resources, data sources, shared datasets, report parts, and folders.
      • Report models and data source views (used with Report Builder).
    • Administer the SSRS security model, including
      • Users and roles.
      • Role assignments (remember to keep these simple).
    • Manage
      • Report snapshot, history, and caching configuration.
      • Schedules, subscriptions, and related settings (Note: SQL Agent must be enabled for automated report execution).
      • Report execution timeout duration.
  • Reporting Services Configuration Manager (RSCM), a configuration GUI application (covered in detail in the following section).
  • A suite of SharePoint Web parts, pages, and documentation.
  • Report Builder, a ClickOnce application for designing and executing ad hoc reports.
  • BIDS, which includes Report Designer; Model Designer; specialized tool windows; and other capabilities for report development, testing, and deployment.
  • Two Microsoft .NET Report Viewer controls (one for ASP.NET, one for Windows Forms), for integrating reporting in custom applications. Report Viewer offers a rich programming interface for controlling report execution and interactivity and is available for C#, VB .NET, and the other .NET languages.
    • The Report Viewer control is capable of processing SSRS reports using two modes:
      • Local— Using this mode, report processing happens in your application, meaning that SSRS is not required to run your application’s reports.
      • Remote— Using this mode, report processing happens via the Report Server web services.
  • A Windows Management Instrumentation (WMI) provider, which exposes a set of WMI interfaces that programmers can use to configure the Report Server or build other configuration utilities.

Figure 1 provides a tiered view of the SSRS architecture, illustrating each platform component.

Figure 1. SSRS Tiered Architecture Diagram.
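As a small illustration of the ReportServer catalog described above, the deployed reports can be listed with a simple query. Note that dbo.Catalog is an internal table whose schema is undocumented and may change between versions, and the Type code used here is an assumption based on common catalog contents:

```sql
-- List deployed reports from the Report Server catalog
USE ReportServer;

SELECT [Path], [Name], CreationDate, ModifiedDate
FROM dbo.Catalog
WHERE [Type] = 2   -- assumed: 2 = report
ORDER BY [Path];
```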

Posted in SQL | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

Pass Mcse 70-290 Exam Easily

Posted by Alin D on September 5, 2010

Pass Mcse 70-290 Exam Easily

MCSE 2003 70-290 Certification

Get Certified in Days

According to our survey, over 85% of the candidates acknowledge that they have spent needless time and money before finding the most suitable solution to pass the 70-290 exams. It doesn’t matter if you are just starting out and looking for the most suitable way to get certified, or a skilled technician looking for the most efficient way to get certified, we have the right solution for you.

We provide the following to help you get certified in the most convenient way

24/7, around the clock, consulting service that will assist you, guide you and help you, until you get certified. This price also includes; exam vouchers and all other related expenses. There is no further cost to attain your certification.

Our Guarantee

We will refund any payment that you make, should you for any reason fail to get certified. The refund is an unconditional total refund of any moneys paid.

Why MCSE 2003

MCSE 2003 70-290 Certifications are among the most specialized certifications available today. The MCSE 2003 70-290 Certification gives you industry recognition for your expertise in business solutions based on the Microsoft Windows Server 2003 platform and Microsoft 2003 server software. Implementation responsibilities include installing, configuring, and troubleshooting network systems. The MCSE 2003 credential is one of the most widely recognized technical certifications in the industry, a credential in high demand. By earning the premier MCSE credential, individuals demonstrate that they have the skills necessary to lead organizations in the successful design, implementation, and administration of the most advanced Microsoft Windows platform and Microsoft server products.

MCSE 2003 Certification Requirement:

1. Core exams (six exams required)

• Four networking system exams: (four exams required)

Exam 70-290: Managing and Maintaining a Windows Server 2003 Environment.

Exam 70-291: Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure.

Exam 70-293: Planning and Maintaining a Windows Server 2003 Network Infrastructure.

Exam 70-294: Planning, Implementing, and Maintaining a Windows Server 2003 Active Directory Infrastructure.

• One client operating system exam: (one exam required)

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-270: Installing, Configuring, and Administering Microsoft Windows XP Professional.

Exam 70-210: Installing, Configuring, and Administering Microsoft Windows 2000 Professional.

• One design exam:

Exam 70-297: Designing a Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Windows Server 2003 Network.

2. Elective exams (one exam required)

Exam 70-089: Designing, Implementing, and Managing a Microsoft Systems Management Server 2003 Infrastructure.

Exam 70-227: Installing, Configuring, and Administering Microsoft Internet Security and Acceleration (ISA) Server 2000, Enterprise Edition.

Exam 70-228: Installing, Configuring, and Administering Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-229: Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-235: TS: Developing Business Process and Integration Solutions Using BizTalk Server.

Exam 70-236: TS: Microsoft Exchange Server 2007, Configuring.

Exam 70-262: TS: Microsoft Office Live Communications Server 2005 – Implementing, Managing, and Troubleshooting.

Exam 70-281: Planning, Deploying, and Managing an Enterprise Project Management Solution.

Exam 70-282: Designing, Deploying, and Managing a Network Solution for a Small- and Medium-Sized Business.

Exam 70-284: Implementing and Managing Microsoft Exchange Server 2003.

Exam 70-285: Designing a Microsoft Exchange Server 2003 Organization.

Exam 70-297: Designing a Microsoft Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Microsoft Windows Server 2003 Network.

Exam 70-299: Implementing and Administering Security in a Microsoft Windows Server 2003 Network.

Exam 70-301: Managing, Organizing, and Delivering IT Projects by Using Microsoft Solutions Framework 3.0.

Exam 70-350: Implementing Microsoft Internet Security and Acceleration (ISA) Server 2004.

Exam 70-431: TS: Microsoft SQL Server 2005 – Implementation and Maintenance.

Exam 70-445: Microsoft SQL Server 2005 Business Intelligence – Implementation and Maintenance.

Exam 70-500: TS: Microsoft Windows Mobile Designing, Implementing, and Managing.

Exam 70-501: TS: Microsoft Windows Server 2003 Hosted Environments, Configuring, and Managing.

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-624: TS: Deploying and Maintaining Windows Vista Client and 2007 Microsoft Office System Desktops.

Exam 70-630: TS: Microsoft Office SharePoint Server 2007, Configuring.

Exam 70-631: TS: Configuring Microsoft Windows SharePoint Services 3.0.

The author has rich experience in writing; his articles are often published on major websites and in newspapers, are welcomed by a large number of readers, and are widely quoted by other authors.

Posted in Windows 2003 | Tagged: , , , , , , , , , , , , , , , , | Leave a Comment »