Windows Management and Scripting

A wealth of tutorials on Windows Operating Systems, SQL Server and Azure

Posts Tagged ‘Business Intelligence’

SQL Server Analysis Services partitions – Create and Manage

Posted by Alin D on May 12, 2011

Partitions are portions of a SQL Server Analysis Services measure group that hold some or all of the measure group’s data.

When a measure group is first created, it contains a single partition corresponding to all the data in your fact table or view. Additional partitions need to be created for any measure group with more than 20 million rows.

Since a majority of corporate databases have far more than 20 million rows in fact tables, you should know how to create partitions and also be aware of good partition design practices.

You can define partitions using the Business Intelligence Development Studio (BIDS). On the Partitions tab within your project, simply click the New Partition link near a measure group to open the Partition Wizard. (I won’t cover the steps of the Partition Wizard here because it is fairly simple to follow.)

An alternative method of creating new partitions is through XMLA scripts, which is what BIDS does behind the scenes.

You can script an existing partition in SQL Server Management Studio (SSMS) by right-clicking a partition and then choosing Script Partition as CREATE to open a new query window. You will need to edit certain properties such as the partition identifier, its name and the query used for populating the partition.

Here is a sample XMLA for a partition:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
               xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
               xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">
      <ID>Internet_Sales_2001</ID>
      <Name>Internet_Sales_2001</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>
          SELECT
          [dbo].[FactInternetSales].[ProductKey],
          [dbo].[FactInternetSales].[OrderDateKey],
          [dbo].[FactInternetSales].[DueDateKey],
          [dbo].[FactInternetSales].[ShipDateKey],
          [dbo].[FactInternetSales].[CustomerKey],
          [dbo].[FactInternetSales].[PromotionKey],
          [dbo].[FactInternetSales].[CurrencyKey],
          [dbo].[FactInternetSales].[SalesTerritoryKey],
          [dbo].[FactInternetSales].[SalesOrderNumber],
          [dbo].[FactInternetSales].[SalesOrderLineNumber],
          [dbo].[FactInternetSales].[RevisionNumber],
          [dbo].[FactInternetSales].[OrderQuantity],
          [dbo].[FactInternetSales].[UnitPrice],
          [dbo].[FactInternetSales].[ExtendedAmount],
          [dbo].[FactInternetSales].[UnitPriceDiscountPct],
          [dbo].[FactInternetSales].[DiscountAmount],
          [dbo].[FactInternetSales].[ProductStandardCost],
          [dbo].[FactInternetSales].[TotalProductCost],
          [dbo].[FactInternetSales].[SalesAmount],
          [dbo].[FactInternetSales].[TaxAmt],
          [dbo].[FactInternetSales].[Freight],
          [dbo].[FactInternetSales].[CarrierTrackingNumber],
          [dbo].[FactInternetSales].[CustomerPONumber]
          FROM [dbo].[FactInternetSales]
          WHERE OrderDateKey &lt;= '20011231'
        </QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
      <ProcessingMode>Regular</ProcessingMode>
      <ProactiveCaching>
        <SilenceInterval>-PT1S</SilenceInterval>
        <Latency>-PT1S</Latency>
        <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
        <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
        <AggregationStorage>MolapOnly</AggregationStorage>
        <Source xsi:type="ProactiveCachingInheritedBinding">
          <NotificationTechnique>Server</NotificationTechnique>
        </Source>
      </ProactiveCaching>
      <EstimatedRows>1013</EstimatedRows>
      <AggregationDesignID>Internet Sales 1</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Create>

Note that when defining an effective partition, specifying the source of the data it will hold is perhaps the most important part.

As a rule of thumb, your partitions should contain between five and 20 million rows of fact data. Furthermore, you should avoid partition files greater than 500 MB in size. Partition files are stored in your Analysis Services installation folder under data\database_name\cube_name\measure_group_name.

You can also bind a partition to a table, view or a SQL query. If a relational data warehouse has multiple tables holding fact data, you should bind partitions to such tables as long as each table size is constrained as advised above. If you have a single, large fact table, you could write SQL queries for each Analysis Services partition to retrieve only part of this data.

Views provide a nice alternative for partition binding, especially when testing cubes. For example, if a fact table has millions of rows, processing is going to take a long time. For testing the solution, you don’t necessarily need to load all the data. Instead, create a view that selects only a portion of rows from the large table(s).
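
As a rough sketch of this approach, a development-only view might look like the one below (the view name and the 1 percent sample are purely illustrative, assuming the FactInternetSales table used throughout this article):

CREATE VIEW [dbo].[vFactInternetSales_Dev]
AS
-- Return only a small sample of the fact table so test processing stays fast.
SELECT TOP 1 PERCENT *
FROM [dbo].[FactInternetSales]
ORDER BY OrderDateKey;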

Later, when you’re ready to deploy your solution into production, alter your partition(s) definition so that they are bound to the appropriate tables, queries or views.

How do you decide what data to include in each partition? SQL Server Analysis Services uses partitions to speed up MDX queries. Each partition contains an XML file that defines the range of dimension member identifiers in a given partition. When an MDX query is submitted, the Analysis Services engine decides what partition files to scan based on the values in each partition’s XML file.

The XML file is created when you process the partition and the file can be found in each partition folder (the file name is info.xml). Don’t try to edit this file – the dimension key references are internal values that cannot be retrieved from SQL Server Analysis Services.

If the data requested by an MDX query is spread across all partitions in your measure group then Analysis Services has no choice but to read every single partition. To see how every partition in the measure group is read, record a SQL Profiler trace when you run such a query. If the requested data is contained in a single, small partition, your query will only have to scan a single partition file.

Reading a single 500 MB file will invariably beat scanning through 200 files of the same size. However, if you have 200 partitions to read, Analysis Services could scan some of them in parallel, and the query won’t necessarily be 200 times slower without proper partitioning.

For best MDX query performance, you should tailor partition design to the pattern of common queries. Most SQL Server Analysis Services solutions start with measure groups partitioned using a date or periodicity dimension, each partition spanning one month’s or one day’s data.

This is a reasonable approach if your queries are typically focused on a given month or several months. But what if your queries examine data across all months and are specific to product categories? In that case, partitioning only by month won’t be optimal.

If you have 10 years worth of data partitioned by month — which is not unusual — each query would have to examine 120 partitions. In this case, query performance could improve if you further partition data by product category dimension.

For example, dealership sales cube users may only be interested in comparing sales across time for a particular product category – cars, trucks or motorcycles, for example. For such cubes, you could create partitions for each month and each product category.
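
To illustrate, a partition query combining both dimensions might look like the sketch below. It assumes the AdventureWorks DW schema used earlier; the date range and category key are illustrative, and in practice the column list should match the measure group’s source columns rather than SELECT *:

-- Hypothetical partition source: January 2001, product category 1 only.
SELECT f.*
FROM [dbo].[FactInternetSales] AS f
INNER JOIN [dbo].[DimProduct] AS p
    ON f.ProductKey = p.ProductKey
INNER JOIN [dbo].[DimProductSubcategory] AS ps
    ON p.ProductSubcategoryKey = ps.ProductSubcategoryKey
WHERE f.OrderDateKey BETWEEN 20010101 AND 20010131
  AND ps.ProductCategoryKey = 1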

Like any other SQL Server Analysis Services object, partitions have a multitude of properties. Perhaps one of the most frequently discussed is partition slice. This property defines a portion of the measure group data that Analysis Services expects to be exposed by the partition.

Most Analysis Services literature suggests that this property does not have to be set for partitions that use Multidimensional OLAP (MOLAP) storage. While in most situations Analysis Services is smart enough to figure out what dimension members are included in each partition by examining data IDs in info.xml files, to be safe you should always set the partition slice, regardless of the storage mode of your partition.

Partition slices are defined through MDX. This is an example of what a slice definition would look like for a 2001 partition:

<Slice>[Date].[Calendar].[Calendar Year].&amp;[2001]</Slice>

To further partition data by product categories, a slice definition would look like this:

<Slice>([Date].[Calendar].[Calendar Year].&amp;[2001], [Product].[Product Categories].[Category].&amp;[1])</Slice>

If you don’t specify a slice for a given dimension, SQL Server Analysis Services assumes that any member of that dimension can be found in the partition.

For example, say you specify a month and product category in partition slice but do not specify the store. Queries that examine sales data by store, but do not include any filters for date or product, may have to search through every partition.

You can also customize the storage mode for each partition. MOLAP storage mode is optimal for data retrieval — but it copies your relational data. If you prefer to leave your data in the relational format without making its copy, then you can use Relational OLAP (ROLAP) mode for infrequently accessed partitions. For example, most recent partitions can utilize MOLAP storage while historical partitions can use ROLAP.

SQL Server Analysis Services has an upper limit on the number of partitions — 2^31-1 = 2,147,483,647 — but cubes that have this many partitions are rare. Don’t be afraid to create as many partitions as needed.

Occasionally, partitions may need to be merged. For example, if a majority of your queries focus on recent data and historical queries are infrequent, you may have a separate partition for each product line and for each day for 30 or 60 days.

Once the data is stale and seldom accessed, you could combine historical partitions into weekly or monthly partitions. To merge partitions using SSMS, right-click on a partition and choose the Merge Partitions option.

This is what the XMLA for merging 2001 and 2002 partitions looks like:

<MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Sources>
    <Source>
      <DatabaseID>Adventure Works DW 2008</DatabaseID>
      <CubeID>Adventure Works</CubeID>
      <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
      <PartitionID>Internet_Sales_2002</PartitionID>
    </Source>
  </Sources>
  <Target>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2001</PartitionID>
  </Target>
</MergePartitions>

Be aware that you can copy aggregation design from one partition to another, which I will discuss in more detail in a future article. For now know that if you’re happy with your current aggregation design, you can assign it to a newly created or an existing partition.

If a partition has 500,000 or more estimated rows (you can set estimated numbers of rows in BIDS) and you haven’t defined any aggregations for this partition, then BIDS 2008 warns that your application performance can be improved by adding aggregations.

Partitions reduce the time it takes to process your measure group because each partition only loads a portion of the entire fact table or view. Remember that during processing, SQL Server Analysis Services modifies the SQL query defining the partition before sending it over to the relational data source.

For example, earlier I showed the definition for Internet Sales 2001 partition within the Internet Sales measure group of an Adventure Works 2008 database.

The query that Analysis Services sends to SQL Server while processing this partition is considerably different from the original query:

SELECT

[dbo_FactInternetSales].[dbo_FactInternetSalesSalesAmount0_0] AS [dbo_FactInternetSalesSalesAmount0_0],
[dbo_FactInternetSales].[dbo_FactInternetSalesOrderQuantity0_1] AS [dbo_FactInternetSalesOrderQuantity0_1],
[dbo_FactInternetSales].[dbo_FactInternetSalesExtendedAmount0_2] AS [dbo_FactInternetSalesExtendedAmount0_2],
[dbo_FactInternetSales].[dbo_FactInternetSalesTaxAmt0_3] AS [dbo_FactInternetSalesTaxAmt0_3],
[dbo_FactInternetSales].[dbo_FactInternetSalesFreight0_4] AS [dbo_FactInternetSalesFreight0_4],
[dbo_FactInternetSales].[dbo_FactInternetSalesUnitPrice0_5] AS [dbo_FactInternetSalesUnitPrice0_5],
[dbo_FactInternetSales].[dbo_FactInternetSalesTotalProductCost0_6] AS [dbo_FactInternetSalesTotalProductCost0_6],
[dbo_FactInternetSales].[dbo_FactInternetSalesProductStandardCost0_7] AS [dbo_FactInternetSalesProductStandardCost0_7],
[dbo_FactInternetSales].[dbo_FactInternetSales0_8] AS [dbo_FactInternetSales0_8],
[dbo_FactInternetSales].[dbo_FactInternetSalesPromotionKey0_9] AS [dbo_FactInternetSalesPromotionKey0_9],
[dbo_FactInternetSales].[dbo_FactInternetSalesSalesTerritoryKey0_10] AS [dbo_FactInternetSalesSalesTerritoryKey0_10],
[dbo_FactInternetSales].[dbo_FactInternetSalesProductKey0_11] AS [dbo_FactInternetSalesProductKey0_11],
[dbo_FactInternetSales].[dbo_FactInternetSalesCustomerKey0_12] AS [dbo_FactInternetSalesCustomerKey0_12],
[dbo_FactInternetSales].[dbo_FactInternetSalesCurrencyKey0_13] AS [dbo_FactInternetSalesCurrencyKey0_13],
[dbo_FactInternetSales].[dbo_FactInternetSalesOrderDateKey0_14] AS [dbo_FactInternetSalesOrderDateKey0_14],
[dbo_FactInternetSales].[dbo_FactInternetSalesShipDateKey0_15] AS [dbo_FactInternetSalesShipDateKey0_15],
[dbo_FactInternetSales].[dbo_FactInternetSalesDueDateKey0_16] AS [dbo_FactInternetSalesDueDateKey0_16]
FROM
(
SELECT
[SalesAmount] AS [dbo_FactInternetSalesSalesAmount0_0],
[OrderQuantity] AS [dbo_FactInternetSalesOrderQuantity0_1],
[ExtendedAmount] AS [dbo_FactInternetSalesExtendedAmount0_2],
[TaxAmt] AS [dbo_FactInternetSalesTaxAmt0_3],
[Freight] AS [dbo_FactInternetSalesFreight0_4],
[UnitPrice] AS [dbo_FactInternetSalesUnitPrice0_5],
[TotalProductCost] AS [dbo_FactInternetSalesTotalProductCost0_6],
[ProductStandardCost] AS [dbo_FactInternetSalesProductStandardCost0_7],
1     AS [dbo_FactInternetSales0_8],
[PromotionKey] AS [dbo_FactInternetSalesPromotionKey0_9],
[SalesTerritoryKey] AS [dbo_FactInternetSalesSalesTerritoryKey0_10],
[ProductKey] AS [dbo_FactInternetSalesProductKey0_11],
[CustomerKey] AS [dbo_FactInternetSalesCustomerKey0_12],
[CurrencyKey] AS [dbo_FactInternetSalesCurrencyKey0_13],
[OrderDateKey] AS [dbo_FactInternetSalesOrderDateKey0_14],
[ShipDateKey] AS [dbo_FactInternetSalesShipDateKey0_15],
[DueDateKey] AS [dbo_FactInternetSalesDueDateKey0_16]
FROM
(
SELECT
[dbo].[FactInternetSales].[ProductKey],
[dbo].[FactInternetSales].[OrderDateKey],
[dbo].[FactInternetSales].[DueDateKey],
[dbo].[FactInternetSales].[ShipDateKey],
[dbo].[FactInternetSales].[CustomerKey],
[dbo].[FactInternetSales].[PromotionKey],
[dbo].[FactInternetSales].[CurrencyKey],
[dbo].[FactInternetSales].[SalesTerritoryKey],
[dbo].[FactInternetSales].[SalesOrderNumber],
[dbo].[FactInternetSales].[SalesOrderLineNumber],
[dbo].[FactInternetSales].[RevisionNumber],
[dbo].[FactInternetSales].[OrderQuantity],
[dbo].[FactInternetSales].[UnitPrice],
[dbo].[FactInternetSales].[ExtendedAmount],
[dbo].[FactInternetSales].[UnitPriceDiscountPct],
[dbo].[FactInternetSales].[DiscountAmount],
[dbo].[FactInternetSales].[ProductStandardCost],
[dbo].[FactInternetSales].[TotalProductCost],
[dbo].[FactInternetSales].[SalesAmount],
[dbo].[FactInternetSales].[TaxAmt],
[dbo].[FactInternetSales].[Freight],
[dbo].[FactInternetSales].[CarrierTrackingNumber],
[dbo].[FactInternetSales].[CustomerPONumber]
FROM [dbo].[FactInternetSales]
WHERE OrderDateKey <= '20011231'
) AS [FactInternetSales]
)
AS [dbo_FactInternetSales]

Why should you care what query is sent to SQL Server (or another RDBMS) during partition processing? Because query hints or SET options that are valid in a regular SQL statement might not be supported in a partition query definition.

For example, BIDS would allow us to add the SET NOCOUNT ON statement at the beginning of the partition query. If we add this option, however, SQL Server Analysis Services will report a syntax error and fail processing.

You can customize the partition’s processing mode, which defines whether aggregations are calculated during partition processing or by a lazy aggregation thread after processing is complete.

Lastly, you can use the storage location property to store partition data in the default or an alternate location. This property may come in handy if the disk where partition data is normally stored reaches its storage capacity.


Natural Key Versus Surrogate Key in SQL Server

Posted by Alin D on February 1, 2011

When designing a database to support applications you need to consider how you are going to handle primary keys. This article explores natural and surrogate keys, and discusses the pros and cons of each, allowing you to determine what makes the best sense in your environment when you are designing your databases.

When designing a database to support applications you need to consider how you are going to handle primary keys. There are two schools of thought, or maybe three. There are those that say primary keys should always be a made-up key, or what is commonly called a surrogate key. Others say there are good reasons to use real data as a key value; this type of key is known as a natural key. The third group designs their databases so their primary keys are a combination of natural and surrogate keys. In this article I’m going to explore natural and surrogate keys, and discuss the pros and cons of each. This will allow you to determine what makes the best sense in your environment when you are designing your databases.

When you design tables with SQL Server, a table typically has a column or a number of columns that are known as the primary key. The primary key is a unique value that identifies each record.  Sometimes the primary key is made up of real data and these are normally referred to as natural keys, while other times the key is generated when a new record is inserted into a table.   When a primary key is generated at runtime it is called a surrogate key.   A surrogate key is typically a numeric value.  Within SQL Server, Microsoft allows you to define a column with an identity property to help generate surrogate key values. 

Before I talk about the pros and cons of natural and surrogate keys let me first expand a little more on each type of key.  By doing this you will have a better understanding of each of these two types of keys, and will have a more solid foundation to determine which type of key you should use in your database design. 

A natural key is a single column or set of columns that uniquely identifies a single record in a table, where the key columns are made up of real data. When I say “real data” I mean data that has meaning and occurs naturally in the world of data. A natural key is a column value that has a relationship with the rest of the column values in a given data record. Here are some examples of natural key values: Social Security Number, ISBN, and TaxId.

A surrogate key, like a natural key, is a column that uniquely identifies a single record in a table. But this is where the similarity stops. Surrogate keys are similar to surrogate mothers: they are keys that don’t have a natural relationship with the rest of the columns in a table. The surrogate key is just a value that is generated and then stored with the rest of the columns in a record. The key value is typically generated at run time right before the record is inserted into a table. It is sometimes also referred to as a dumb key, because there is no meaning associated with the value. Surrogate keys are commonly numeric.
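
To make the distinction concrete, here is a minimal sketch of the same table keyed both ways (the table and column names are hypothetical, not taken from any sample database):

-- Natural key: real-world data (the ISBN) is the primary key.
CREATE TABLE dbo.Book_NaturalKey
(
    ISBN        varchar(13)   NOT NULL PRIMARY KEY,
    Title       nvarchar(200) NOT NULL,
    PublishDate date          NULL
);

-- Surrogate key: an IDENTITY value generated at insert time is the primary key;
-- the natural key is still enforced with a UNIQUE constraint.
CREATE TABLE dbo.Book_SurrogateKey
(
    BookID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ISBN        varchar(13)   NOT NULL UNIQUE,
    Title       nvarchar(200) NOT NULL,
    PublishDate date          NULL
);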

Now that you have an understanding of the difference between these two types of keys I will explore why you might use one key over the other. In the world of data architects there is much debate over when it is appropriate to use a natural key and when a better solution would be to use a surrogate key. As already stated there are mainly just two different camps. Some say you should always use a natural key and the others say a surrogate key is best. I suppose there is also a third camp that uses a combination of both natural keys and surrogate keys in their database designs. Rather than state my opinion on which is best I’ll give you the pros and cons of using each and then you can decide which is best for your design.

There is a definite design and programming aspect to working with a database that is built on the concept that all keys will be surrogate keys. To better understand these programming aspects, review these pros and cons of using surrogate keys.

Pros:

  • The primary key has no business intelligence built into it, meaning you cannot derive any meaning or relationship between the surrogate key and the rest of the data columns in a row.
  • If your business rules change and you need to update your natural key, this can be done easily without causing a cascade effect across all foreign key relationships, because the surrogate key, not the natural key, is used in all foreign key relationships. Surrogate keys will not be updated over time.
  • Surrogate keys are typically integers, which only require 4 bytes to store, so the primary key index structure will be smaller in size than their natural key counterparts. Having a small index structure means better performance for JOIN operations.

Cons:

  • If foreign key tables use surrogate keys then you will be required to have a join to retrieve the real foreign key value, whereas if the foreign key table used a natural key then the natural key would already be included in your table and no join would be required. Of course this is only true if you only need the natural key column returned in your query (as sketched below).
  • Surrogate keys are typically not useful when searching for data, since they have no meaning.
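
For example, with hypothetical Orders and Customers tables (the names are illustrative only), the surrogate-key design forces a join whenever the natural key is needed:

-- Orders stores the surrogate CustomerID, so returning the natural key
-- (AccountNumber) requires a join to Customers.
SELECT o.OrderID, c.AccountNumber
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;

-- Had Orders carried the natural key directly, no join would be needed.
SELECT OrderID, AccountNumber
FROM dbo.Orders;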

Having natural keys as indexes on your tables means you will have different programming considerations when building your applications. You will find the pros and cons of natural keys to be just the opposite of the pros and cons of surrogate keys.

Pros:

  • Fewer joins are required when you only need to return the key value of a foreign key table, because the natural key will already be embedded in your table.
  • Searching is easier because natural keys have meaning and are stored in your table. Without the natural key in your table, a search for records based on a natural key would require a join to the foreign key table to get the natural key.

Cons:

  • Changing a natural key requires much more work, especially when foreign key relationships have been built off the natural key.
  • Your primary key index will be larger because natural keys are typically larger in size than surrogate keys.
  • Since natural keys are typically larger in size than surrogate keys and are strings instead of integers, joins between two tables on a natural key will take more time.

There is much debate in the world of data modeling over what kind of data should be used to support primary keys. There are some purists that say all primary keys should be surrogate keys, no matter how small the natural key or the fact that the natural key will never be updated. Others say you need to use natural keys because they make coding your application just so much easier. When you design your databases you need to decide what works best in your environment. What kind of database designer are you, and what design camp do you fall into?


Seven things you have to know about SQL Server 2008 R2

Posted by Alin D on January 21, 2011

Some consider SQL Server 2008 R2 a “business intelligence release” because the largest, most publicized changes cater to the BI functionality of the product. There’s a lot more to R2 than that, though. Here we’ll look at some of the less popular — but no less important — changes to the latest version of SQL Server.

1. Increased SQL Server Express database size.


In the past, one of the main things that kept people from using the free version of SQL Server Express was the small database size. While SQL Server Express still has a maximum database size limit, it has been increased in SQL Server 2008 R2 by 150% (from 4 GB to 10 GB). This increase in size makes it possible for a much larger set of applications to use Express. It also allows applications that start on SQL Server Express to use the Express instance for a much longer period of time before being upgraded to one of the paid-for editions.

2. Extended Protection


SQL Server 2008 R2’s Extended Protection feature uses service and channel binding to help prevent authentication relay attacks. Service binding is used to protect against luring attacks while channel binding helps with both luring and spoofing attacks. Since service binding requires more server resources, the protection mode you use should depend in part on the CPU requirements of each.

Extended Protection is available on Windows Server 2008 and Windows 7 natively, though patches are available for download for other operating systems. The feature enhances the protection that already exists when using Windows Authentication.

3. Support for wildcard certificates with SSL encryption


In addition to supporting SSL encryption between the client and server where the SSL certificate matches the fully qualified domain name (FQDN) that the user connects to, wildcard certificates are now supported as well. This can help reduce SSL certificate costs and management overhead by allowing a single certificate to be used for the company website, webmail site, SQL Server and more. For instance, if a company needs to encrypt the connections to all SQL Servers, this can be done with a single SSL certificate instead of requiring one for each SQL Server.

4. Merge replication over IIS 7


When using merge replication between locations, you now have the option of using Internet Information Services (IIS) 7 as the listener for SQL Server. IIS 7 is configured on the publisher, and the subscribers connect to the IIS 7 installation for data replication. This is all protected by SSL encryption.

This great new feature allows for merge replication between two SQL Servers without exposing either to the public Internet. There is also no need for a VPN connection between the two sites. This makes it much easier for data to be replicated between two different companies without exposing either company’s network infrastructure to the other or the public Internet.

5. Collaboration and reuse within SQL Server Reporting Services (SSRS)

SQL Server Reporting Services for R2 supports several new features to enable and enhance the existing collaboration and component reuse within the SSRS environment. This includes new report parts and shared datasets. The ability to reuse these components will greatly decrease the amount of time needed to create new reports. It should also increase the uniformity of reports throughout the enterprise as they can now use the exact same module.

6. New data source types

The SQL Server 2008 R2 release of SSRS supports three new data source types — SQL Azure, Parallel Data Warehouse and a Microsoft SharePoint list. These new data sources let companies host their applications and databases in the Azure cloud while using their local SSRS infrastructure to run reports, implement report scheduling and automate file saving to network shares.

Similarly, support for the new SQL Server 2008 R2 Parallel Data Warehouse edition should allow Reporting Services to display reports from large enterprise data warehouses.

7. BIDS is backward compatible with SSRS 2008
The Business Intelligence Development Studio (BIDS) for SSRS 2008 R2 can be made backward compatible with SSRS 2008 by setting the version within the Report Server project properties. This lets you use a single development environment instead of having to maintain a separate installation, as is the case with SQL Server 2005 and 2008. This backward compatibility allows companies to more easily deploy SSRS 2008 R2 as a separate environment without having to upgrade all the existing reports at the time of deployment.

As you can see, there’s a lot more to R2 than PowerPivot and the other major features currently getting most of the marketing love. Hopefully this has opened your eyes to the other great enhancements that have been made for SQL Server 2008 R2.


SSIS Events and Handlers

Posted by Alin D on October 28, 2010

In its most basic form, creating SQL Server 2008 Integration Services packages using Business Intelligence Development Studio is a relatively straightforward process that leverages the intuitive characteristics of the Designer interface (based on Visual Studio design guidelines). However, this simplicity is somewhat deceiving, hiding the complexity of the underlying code and additional functionality associated with it. In this article, we will explore one such feature, which involves events triggered during package execution and the ability to react to their outcome through event handlers.

Launching an SSIS package initiates a series of actions associated with its executable components, namely tasks and their enclosing containers (such as Foreach Loop, For Loop, or Sequence). These actions, which span the entire duration of the package runtime, can be grouped into two main categories – validation (responsible for evaluating whether successive tasks are expected to complete successfully given the current conditions) and execution (carrying out the actual implementation steps). Both of them are further subdivided into more granular units, depending on their relative chronological order (i.e. taking place before, during, and after validation and execution). While their primary purpose is to facilitate execution of the underlying code, they also raise events reflecting their status. Some of these events, in turn, are capable of triggering execution of custom actions defined through containers (resembling standard SSIS packages in their format) referred to as event handlers. This additional functionality can be used in a variety of ways, typically geared towards assisting with logging and troubleshooting or providing a basis for remediation of runtime errors and warning conditions.

SQL Server 2008 Integration Services offers a fairly large number of built-in event types. Some of them have a specialized purpose, pertaining to the characteristics of the task or container they are part of. For example, the OnPipelineRowsSent event is quite helpful for debugging purposes, but only when dealing with Data Flow tasks. Similarly, OnWMIEventWatcherEventOccurred and OnWMIEventWatcherEventTimeout are relevant exclusively in the context of WMIEventWatcher tasks. On the other hand, there are also several generic events, which are applicable to all executables (keep in mind that the term executable designates here any entity capable of running SSIS code, such as a task, a container, or the package itself). These types of events also have corresponding handlers configurable via the Business Intelligence Development Studio interface. Here is a brief listing summarizing their basic characteristics:

  • OnError – triggered by a runtime error of a current executable. The corresponding event handler is commonly utilized in order to record data documenting circumstances of the failure, but it is not intended to terminate package execution (this can be accomplished, if desired, by leveraging OnFailure precedence constraint or by forcing handler failure with FailPackageOnFailure property set to TRUE).
  • OnInformation – generated as the result of successful completion of a current stage of execution.
  • OnWarning – complements OnError and OnInformation events, covering situations when an issue surfacing during component execution does not warrant raising an error condition.
  • OnPreValidate – raised prior to the validation stage. This action might take place several times during package execution for a given component (depending on the value of DelayValidation property, which, when set to FALSE, eliminates initial validation).
  • OnPostValidate – signals completion of validation stage of an executable.
  • OnPreExecute – designates that an executable is about to be started.
  • OnPostExecute – takes place when an executable finishes running. This (in combination with OnPreExecute) comes in handy when evaluating the performance of a component.
  • OnExecStatusChanged – occurs when status (such as Abend, Completed, Executing, Suspending, or Validating) of a current executable changes.
  • OnVariableValueChanged – allows you to detect changes to SSIS variables (providing that such variables have their RaiseChangedEvent property set to TRUE).
  • OnProgress – applicable if progress of a given executable is quantifiable (and can be especially useful for monitoring long-running packages). Additional information can be derived by evaluating system variables associated with the corresponding event handler (including ProgressComplete, ProgressCountLow, and ProgressCountHigh).
  • OnQueryCancel – fires periodically, allowing you to cancel execution of a given component (assuming such task is feasible).
  • OnTaskFailed – indicates failure of a task. It is typically accompanied by OnError event.

It is worthwhile noting that handlers are fired in a synchronous manner, which means that a thread triggering them waits for their completion before continuing its execution. In addition, they follow rules imposed by the hierarchical structure of SSIS packages. More specifically, by default, an event invoked by a given component will trigger a corresponding handler implemented not only on that component’s level, but also those defined for its parent and grandparents (for the matching event type). This allows you, on one hand, to ensure consistent behavior of all components in a package without code duplication (by creating a single event handler for the top container) and, on the other, to introduce exceptions by creating custom event handlers on an arbitrary level of the package hierarchy. Further flexibility is made possible through manipulation of the Propagate system variable of Boolean type. Setting it to False within the scope of a container-level event handler will ensure that its parent’s handlers will not be invoked. In addition, by turning on the DisableEventHandlers property of a given executable, you have the ability to prevent its events from triggering any of its handlers (although this has no impact on event handlers defined in its parent and grandparent containers).

As mentioned earlier, event handlers are used primarily to track the progress of execution of individual components and troubleshoot any issues that might surface during package runtime. A collection of system variables helps you make information captured in this manner more meaningful. Some of them, such as SourceDescription, SourceID, SourceName, or EventHandlerStartTime are common to all event handlers, while others are available only while dealing with specific event types (such as ErrorCode or ErrorDescription, which apply specifically to OnError events or ProgressComplete, ProgressCountLow, ProgressCountHigh, and ProgressDescription pertinent to OnProgress events).

To conclude our coverage of event handlers in SQL Server 2008 Integration Services, let’s briefly examine the Business Intelligence Development Studio interface, which allows you to manage their configuration (it is also possible to accomplish the same outcome through programming methods, as described in the SQL Server 2008 R2 Books Online). Start by opening an SSIS package in the Designer window. Next, switch to its Event Handlers tab. Right underneath the top tabbed edge, you will notice two list boxes. The one on the left lists all of the package’s SSIS containers and tasks displayed in hierarchical format (with the package itself as the top-level node). The one to its right gives you the ability to choose an event handler to be defined for the component currently selected on the left (your options include all of the event types enumerated above). Once you make your choice, simply click on the empty area of the Designer interface and follow the same actions you used when designing SSIS packages to populate Control Flow tasks and Data Flow components (including adding Toolbox items via drag-and-drop, configuring system and user variables, and defining Connection Managers).


Using SSIS Logging to Resolve Runtime Errors and Performance Issues

Posted by Alin D on October 28, 2010

Runtime errors and performance issues can be difficult to identify and resolve. One of the primary methods that assist with their resolution involves generating logs, which contain records of events taking place during code execution. This article provides a comprehensive overview of logging options available in SQL Server 2008 R2 Integration Services.

In general, software programming presents a unique set of challenges when it comes to troubleshooting coding mistakes. While problems surfacing during the design stage and resulting from syntax or type conversion errors are relatively easy to detect and correct (this can be attributed to guidance incorporated into development tools, such as Business Intelligence Development Studio used to implement SQL Server 2008 Integration Services packages), runtime errors and performance issues are considerably more difficult to identify and resolve. One of the primary methods that assist with their resolution involves generating logs, which contain records of events taking place during code execution. In this article, we will provide a comprehensive overview of logging options available in SQL Server 2008 R2 Integration Services.

An overwhelming majority of SSIS log entries represent events raised by packages and their components (you can find out more about them from our earlier article, “SSIS Events and Handlers”). Their creation is handled by log providers, which record arbitrarily chosen events in a designated target location, such as a file, a table hosted in a SQL Server database, a SQL Server Profiler trace, or the Windows Application Log. You have an option of either taking advantage of existing log providers or developing custom ones. We will focus here on the first of these two categories, which includes the following provider types:

  • SSIS log provider for Text files – produces output in the comma-separated values (CSV) format, stored in an arbitrarily chosen file (via a file connection manager), which can afterwards be easily imported into Excel for further analysis.
  • SSIS log provider for XML files – yields an XML-formatted file (which implies its dependency on a file connection manager), making it suitable for a review with help of any XML-compatible viewer (and easily presentable in HTML-based reports).
  • SSIS log provider for SQL Server – records event entries in the sysssislog table of msdb database (rather than sysdtslog90 used in SQL Server 2005 – although schemas in both cases are identical) by leveraging sp_ssis_addlogentry stored procedure. In order to be able to accomplish this, the provider requires an OLE DB connection manager.
  • SSIS log provider for SQL Server Profiler – stores event data in *.trc files (which explains its reliance on a file connection manager) that employ SQL Profiler-specific format. This type of provider is intended primarily for troubleshooting performance issues, allowing you to correlate SQL Server-specific operations traditionally recorded in SQL Profiler traces with corresponding SSIS events.

  • SSIS log provider for Windows Event Log – dumps events into the Windows Application Log. For example, invoking package execution is recorded as Event ID 12556 and its completion is represented by Event ID 12557 (both are easily identifiable by their SQLISPackage100 source).
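
For the SQL Server log provider described above, the captured entries can be reviewed with a simple query. Here is a minimal sketch, assuming the default msdb destination and the standard sysssislog columns:

-- Most recent errors, warnings and task failures written by the SSIS log provider for SQL Server.
SELECT  event,
        source,
        starttime,
        endtime,
        message
FROM    msdb.dbo.sysssislog
WHERE   event IN ('OnError', 'OnWarning', 'OnTaskFailed')
ORDER BY starttime DESC;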

It is worth noting that all providers that you assign to the package and its components share the same collection of events that are to be recorded along with specific data they should contain. As you might expect, at a minimum, you can choose from the set of events common to all SSIS components (which, as we described in more detail earlier, consist of OnError, OnInformation, OnWarning, OnPreValidate, OnPostValidate, OnPreExecute, OnPostExecute, OnExecStatusChanged, OnVariableValueChanged, OnProgress, OnQueryCancel, and OnTaskFailed). However, your selection is likely to be much wider, since it includes events specific to each component present in the package. In addition, you will also find an entry labeled Diagnostic, which is intended (based on enhancements introduced in Service Pack 2 of SQL Server 2008 Integration Services) for detailed troubleshooting of connectivity issues (it should be disabled otherwise due to the high volume of generated events, which are likely to have a negative performance impact). As indicated above, each log provider (with the exception of the Windows Event Log provider, which dumps its output directly to the Application Event Log) requires a corresponding connection manager defined as part of its configuration.

For each event type, you have the ability to specify the following individual pieces of data that should be logged.

  • Computer (identifying the name of the system on which the recorded event took place)
  • Operator (indicating the name of the user who invoked package execution)
  • SourceName (providing the name of an executable, such as a task, container, or a package where the event originated)
  • SourceID (Globally Unique Identifier – or simply GUID – matching ID property of the executable and assigned to it at its creation)
  • ExecutionID (a unique identifier assigned to each package execution instance, allowing you to differentiate between two distinct runs)
  • MessageText (providing a description associated with the event)
  • DataBytes (containing log payload revealing auxiliary data about the event).

Each executable can be configured independently in regard to events and their details that should be recorded during its runtime. Effectively, you have the ability to log a particular event that takes place on the package level, but filter it out for each of its child components (and vice versa). In addition, it is possible to enforce different levels of detail to be recorded in each of these scenarios (excluding individual pieces of data if they are not relevant in a given context).

In order to better understand dependencies between different logging settings, let’s review a sequence of steps necessary to configure them. Start by opening an SSIS package in the Designer interface of Business Intelligence Development Studio. Right-click on the empty area of its Control Flow tab and select the Logging… entry from its context sensitive menu (alternatively, you can choose the same item from the SSIS top-level menu). The resulting Configure SSIS Logs dialog box is divided into two sections. The left one, labeled Containers, lists the hierarchical structure of containers, with checkboxes next to each. The one on the right hand side is divided into two tabs – Providers and Logs and Details. The purpose of the first of them is to assign and configure a log provider that will be used to collect data during package execution. To accomplish this, highlight the top-level node in the Containers window (which represents the package), select one of the five entries listed in the Provider type listbox (on the right) and click on the Add… command button. As a result, the selected provider will appear in the area labeled Select the logs to use for the container directly underneath. If your choice happened to be SSIS log provider for Windows Event Log, you can proceed to the next step. Otherwise, click on the listbox appearing in the Configuration column and either pick an existing connection or define a new one (via Configure OLE DB Connection Manager Editor in the case of SSIS log provider for SQL Server or File Connection Manager Editor for all remaining types). Note that it is possible to create multiple log providers of the same type, which allows you to log events to multiple destinations.

With the intended log providers and their corresponding connection managers in place, switch to the Details tab and select events to be logged. Turning on the advanced view (by clicking on the Advanced>> command button) will additionally allow you to specify the pieces of data you are interested in (such as Computer, Operator, SourceName, SourceID, ExecutionID, MessageText, and DataBytes). In essence, this gives you the ability to choose specific events along with individual data for each by marking checkboxes on or off in a table-like structure (where rows represent event types and individual pieces of data appear as columns).

Logging on the package level can be either enabled or disabled; however, child containers have three possible settings. The third one, represented by a grayed checkbox, indicates that the logging configuration is inherited from the parent (you also have the ability to determine the resulting value for a given container by checking whether its LoggingMode property is set to Enabled, Disabled, or UseParentSetting). This is convenient if you want to enforce consistency across all executables (since it avoids having to change the setting in every child).

Once filtering settings are configured, you have an option to evaluate whether they yield the desired level of logging without generating (and reviewing) actual logs. Instead, prior to invoking package execution, activate the Log Events window (accessible via View->Other Windows menu) in the Designer interface of Business Intelligence Development Studio (to accomplish the same when invoking package execution via the DTExec command line utility, take advantage of its /ConsoleLog switch). Content of the window will reflect assigned settings even if log providers have been temporarily disabled.

If you want to apply the same configuration to multiple packages, use the Save… command button in the Configure SSIS Logs dialog box (while the top node is highlighted in its Containers section). This will allow you to store current logging settings in an XML formatted file, which you can subsequently apply to another package loaded into Business Intelligence Development Studio (via the Load… command button in the same dialog box).


SSIS Script Task

Posted by Alin D on October 27, 2010

SQL Server 2008 Integration Services contains an assortment of predefined Control Flow tasks designed to carry out specific types of actions. Collectively, they cover a wide range of scenarios; however, their specialized nature sometimes turns out to be overly restrictive. This article explains how to accommodate these custom requirements by employing the considerably more flexible Script Task.

SQL Server 2008 Integration Services contains a diverse assortment of predefined Control Flow tasks, which are designed to carry out specific types of actions. While collectively they cover a wide range of scenarios involving data extraction, transformation, and loading, their specialized nature sometimes turns out to be overly restrictive. In situations like these, it is usually possible to accommodate custom requirements by employing the considerably more flexible Script Task. In this article, we will cover its most relevant features (it is important to note that its purpose is quite different from the Data Flow-based Script Component, whose characteristics we are also planning to present on this forum).

It is hard to overstate the flexibility of the Script Task, considering that the boundaries of its capabilities are defined primarily by your ingenuity, skills, and the .NET programming model (starting with SQL Server 2008 Integration Services, it became possible to use Microsoft Visual C#, in addition to the Microsoft Visual Basic available in its predecessor). The task operates essentially as a wrapper of managed code with access to SSIS-specific objects, including their methods and properties, interacting with a parent container (and other SSIS entities) through arbitrarily chosen System and User variables. Its modifications are handled using Visual Studio Tools for Applications (VSTA), replacing Visual Studio for Applications (VSA) present in earlier versions (which, incidentally, was the primary obstacle to providing support for Visual C#). The VSTA interface offers visual enhancements standard in practically every current software development environment, such as color-coding or IntelliSense, as well as debugging functionality including breakpoints (which integrate with breakpoint indicators of SSIS tasks and containers) or Immediate and Output windows. Furthermore, it simplifies referencing Dynamic Link Libraries as well as making it possible to directly reference Web services and COM modules, eliminating the need for the creation of proxy classes and interop assemblies or for copying them into the Global Assembly Cache and Microsoft.NET\Framework folders (which used to be the case when working with VSA). The resulting code is precompiled into binary format (VSA was more flexible in this regard, giving you an option to postpone compilation until execution), effectively shortening the total runtime of the package (although at the cost of its overall size).

In order to identify the most relevant characteristics of Script Task, let’s examine in more detail its interface exposed in the Business Intelligence Development Studio. Create a new project based on the Integration Services template, add the task to its Designer window (by dragging its icon from the Toolbox), and display its Editor dialog box (by selecting Edit entry from its context sensitive menu), which is divided into three sections:

  • Script section – containing the following elements:
    • ScriptLanguage textbox – designates the programming language (Microsoft Visual Basic 2008 or Microsoft Visual C# 2008) in which code contained within the task is written. Make sure to choose the intended entry before you activate the Visual Studio Tools for Applications interface (by clicking on the Edit Script… command button), since at that point, you will no longer be able to alter your selection (this action triggers auto generation of the ScriptMain class based on a built-in template using the language of your choosing).
    • EntryPoint textbox – identifies a method (which must be defined as part of the ScriptMain class in the VSTA-based project) that is invoked when the Script Task executes (set by default to Main)
    • ReadOnlyVariables and ReadWriteVariables textboxes – determines which SSIS variables will be accessible from within the script by listing their names (as comma-separated entries in the format namespace::variable name). While it is possible to type them in, the most straightforward (and the least error prone – since they are case sensitive) approach involves pointing them out directly in the Select Variables dialog box accessible via the ellipsis (...) command button located next to each textbox. Another approach to specifying SSIS variables that can be either viewed or modified within a Script Task leverages LockForRead and GetVariables methods of VariableDispenser object (we will explore it in more detail in our future articles), however it is important to realize that these methods are mutually exclusive (an attempt to reference the same variable using both will result in an error).
    • Edit Script… command button – triggers display of Microsoft Visual Studio Tools for Applications 2.0 Integrated Development Environment with the majority of its desktop estate occupied by the window containing the task code. In addition, you will also find Properties and Project Explorer windows, where the latter references the newly created Script Task via automatically generated identifier (which guarantees its uniqueness and therefore should not be changed). Effectively, content of the task constitutes a separate project, with its own set of properties (independent from characteristics of the SSIS project it is part of) accessible via its context sensitive menu and displayed in the tabbed window divided into the following sections:
      • Application – designates properties of the assembly (some of which, such as output file, name, and root namespace are derived from the auto generated identifier of the script task). In general, settings on this page are intended to create independent assemblies via a full-fledged edition of Visual Studio and therefore are not relevant in the context of our presentation (as a matter of fact, most of them are grayed out because of their read-only status), although you have the ability to customize Assembly Information (including such parameters as Title, Description, Company, Product, Copyright, Trademark, Assembly Version, File Version, GUID, or Neutral Language) as well as Make assembly COM-Visible.
      • Compile – allows you to modify the Build output path (by default pointing to bin subfolder), Compile option (such as Option explicit, Option strict, Option compare, and Option infer), a number of Warning configurations, settings such as Disable all warnings, Treat all warnings as errors, Generate XML documentation file, or Register for COM interop, as well as a number of Advanced Compile Options (for example, defining DEBUG, TRACE, or custom constants).
      • Debug – provides the ability to assign Command line arguments and Working directory for debug start options.
      • References – likely the most helpful feature available via the Properties window, considerably simplifies adding assembly references to your projects (replacing cumbersome process implemented in previous version of Visual Studio for Applications of SQL Server 2005 Integration Services) as well as identifying and removing unused ones.
      • Resources – facilitates management of project resources, such as strings or bitmap, icon, audio, and text files. This functionality is intended for development of standalone programs and is not applicable here.
      • Settings – defines project’s application settings and similar to the Resources page, contains entries pertinent to development of independent applications.
      • Signing – provides the ability to sign assemblies (which requires a strong name key file), which is not relevant in this context.

  • General section – containing the following entries:
    • Name – allows customizing the name of the Script Task (which happens to be its default) in order to improve readability of your package. Note that its value must be unique within a package.
    • Description – intended for additional information helping you make packages self-documenting.

  • Expressions section – provides the ability to assign values of Script Task properties by dynamically evaluating associated expressions (rather than specifying their values directly).


Reporting Services 2008 R2: Geospatial Visualization – Part II

Posted by Alin D on September 11, 2010

This is the continuation of my previous article: Reporting Services 2008 R2: Geospatial Visualization – Part I. Part I discussed the main characteristics of the Map Report Item, data sources for spatial data, how to create a map report using shapefiles, and finally how to add analytical data to it. Our goal in this part is to create two reports as below:

In Part I we created the first report; please refer to that article and have the SalesByCountry report in the project, but note that you can create the second report without having the first one. The second report shows the locations of customers and the rank given to them.

The report we are going to create shows how customers are geographically spread in the country. In addition, it shows them with markers based on the rank given to them using analytical data. Note that you need the AdventureWorks2008R2 database for creating this report. If you do not have it you can download it here.

1.       Open Microsoft Business Intelligence Development Studio and open the Report Project we created in the previous article. If you did not follow Part I, create a new project.

2.       Add a new report. Do not use the Report Wizard because it does not support creating maps; instead use Report. Name the report SalesByCustomers.rdl.

3.       Open the toolbox and double-click the Map report item. This adds a Map Report Item to the report and opens the New Map Layer wizard. The first page of the wizard is for setting the source for the spatial data (please refer to Part I for information regarding sources for spatial data). We created the first report using shapefiles. Now, let's use the Map Gallery to create the report. Select the first radio button. Under Map Gallery Tree, select USA by State Inset. Your screen should look like the one below:

Click Next to continue.

4.       Leave defaults in Choose spatial data and map view options and click Next.

5.       Make sure that Basic Map is selected in Choose map visualization. Note that the other two options require analytical data to be added in the wizard. Click Next.

6.     In the  Choose color theme and data visualization page, uncheck the Single color map checkbox. If required, change the Theme, or else continue with the Generic theme. Click Finish to close the wizard.

7.       As in the first report, delete the Distance scale and Color scale. Delete Alaska and Hawaii from the map as well. You can delete these by selecting them (just click on them) and hitting the DELETE key. Since we have not assigned any data to the legend, it will not appear when the report is previewed, so leave it in the report. Below is the report when it is previewed.

8.       Now to add analytical data. Go back to the Designer surface and open the Report Data window.

9.       Click the New button in the Report Data and click the Dataset item to create a new dataset. Name the dataset SalesByCustomer and select the Use a dataset embedded in my report radio button. Click the New button next to the Data source drop-down to create a new data source.

10.   Create the connection to the AdventureWorks2008R2 database and name the data source AdventureWorks2008R2. Copy and paste the query below into the Query text area.

SELECT
    s.BusinessEntityID
    , s.Name
    , a.SpatialLocation
    , t1.TotalAmount
    , t1.[Rank]
FROM Sales.Store AS s
INNER JOIN Person.BusinessEntityAddress AS bea
    ON s.BusinessEntityID = bea.BusinessEntityID
INNER JOIN Person.AddressType AS at
    ON bea.AddressTypeID = at.AddressTypeID
    AND at.Name = 'Main Office'
INNER JOIN Person.Address AS a
    ON bea.AddressID = a.AddressID
INNER JOIN Person.StateProvince AS sp
    ON a.StateProvinceID = sp.StateProvinceID
INNER JOIN Sales.SalesTerritory AS st
    ON sp.TerritoryID = st.TerritoryID
INNER JOIN Person.CountryRegion AS cr
    ON st.CountryRegionCode = cr.CountryRegionCode
INNER JOIN (
    SELECT
        c.StoreID AS BusinessEntityID
        , SUM(s.TotalDue) AS TotalAmount
        , NTILE(5) OVER (ORDER BY SUM(s.TotalDue) DESC) AS [Rank]
    FROM Sales.SalesOrderHeader s
    INNER JOIN Sales.Customer c
        ON c.CustomerID = s.CustomerID
    WHERE c.StoreID IS NOT NULL
    GROUP BY c.StoreID
    HAVING SUM(s.TotalDue) > 100000
) AS t1
    ON t1.BusinessEntityID = s.BusinessEntityID
WHERE cr.Name = 'United States'

This query returns customers and their sales amounts. The rank is generated using NTILE, which assigns five ranks. To reduce the number of records, only customers with total sales greater than 100,000 are taken, and the result set is further filtered to 'United States'. In addition, customer locations are added from the Person.Address table. Note that SpatialLocation is stored as geography data. When the query is run, you should see a result set like the one below.
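If NTILE is new to you, the throwaway snippet below (not part of the report; the amounts are invented purely for illustration) shows how NTILE(5) distributes rows into five groups, with any remainder going to the lowest-numbered groups.

-- Standalone illustration of NTILE(5); the six amounts are sample values only.
SELECT Amount,
       NTILE(5) OVER (ORDER BY Amount DESC) AS [Rank]
FROM (VALUES (900000), (750000), (500000), (320000), (150000), (110000)) AS v(Amount);
-- With six rows and five groups, the first group gets the extra row:
-- 900000 and 750000 get rank 1, then 500000 -> 2, 320000 -> 3, 150000 -> 4, 110000 -> 5.

In the report query above, the same pattern is applied per store, so rank 1 ends up holding the best customers.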

Click OK to save the dataset.

11.   Double-click the Map Item to see the Map Layer pane. You can see that one Polygon Layer named PolygonLayer1 is in the pane. In order to show the locations of customers, we need geospatial data related to customers' locations. That data exists in the analytical dataset we just created, so now we add it to the map. The easiest way of adding it is adding another layer with the wizard. Click the first icon in the Map Layer pane, which is New Layer Wizard. Note that you can do the same with the second icon, Add Layer; the only difference is that the second icon does not start a wizard and you have to set the dataset for the layer manually. If you clicked the first icon, the wizard has opened and, to continue, you need to provide a data source for spatial data. Select the third radio button, which is SQL Server spatial query, and click Next.

12.   Since we have already added the dataset we need, select the first radio button which is Choose an existing dataset with SQL Server spatial data in this report and select the SalesByCustomer dataset. Your screen should be as below:

Click Next to continue.

13.   The window you see now is Choose spatial data and map view options. Ensure that SpatialLocation is selected in the Spatial field drop-down and that Point is selected in the Layer type drop-down. Check the Embed map data in this report checkbox and leave the Add a Bing Map layer checkbox unchecked. Click Next.

14.   The next screen is for setting the visualization of the data. Remember, we are going to show the ranks of customers with markers, hence the best selection is the Analytical Marker Map. Note that you could also use the Bubble Map.

Click Next to continue with the wizard.

15.   The next window is for selecting or adding an analytical dataset to the layer. Since the added dataset,  SalesByCustomer, contains analytical data as well, we can use this. Note that if you had selected Basic Marker Map in the previous window, you would not get this window. This window appears only for the Bubble Map and the Analytical Marker Map. If you need to load analytical data with a different dataset, you will need to make sure that the new dataset has a matching field which can be used for matching records in the spatial dataset.

Select SalesByCustomer and click Next.

16.   The next window is for setting the relationship between the spatial dataset and the analytical dataset. Use BusinessEntityID to match datasets.

Click Next to continue.

17.   Leave all checkboxes unchecked in the Choose color theme and data visualization window. Select Generic for Theme. Click Finish to complete the wizard. Here is our report now.

18.   Now to add marker types to the report. Since we have 5 ranks for customers, we need 5 marker types for showing them. On PointLayer1, click the down-arrow and select the Marker Type Rule… item. This opens the Map Marker Type Rule Properties window.

19.   Select the second radio button, which is Visualize data by using markers. When this is selected, the Data field drop-down is shown. Select #Rank as the data field. The list below the drop-down contains the markers for the data, in our case, for ranks. Set the first 5 markers as Star, Diamond, Triangle, Rectangle, and Circle. Delete the other markers.

20.   Now click on the Legend tab. Remember, we have a legend in our map report which has not yet been used. We can use it to show the meaning of the markers. Make sure Legend1 is selected in the Show in this legend drop-down. Change the Legend text to Rank – #TOVALUE{N0}. Note that #FROMVALUE and #TOVALUE are decided by the Map Marker Type Rule based on the number of distinct values in the Data field we selected. Since we have only 5 values and we have added 5 markers, the legend will be shown as Rank – 1, Rank – 2, …, and Rank – 5.

Click OK to save the settings. If you preview the report you should see that the customers' locations are shown with the selected markers.

21.   The next step is adding colors to the markers. As a general rule, we show the best in green and the worst in red. Go to the designer and double-click the Map Item to see Map Layers. Click the down-arrow on PointLayer1 and select the Point Color Rule… item. Once the Map Color Rules Properties window is open, select the third radio button – Visualize data by using color ranges. Since we are going to apply color rules on Rank, select #Rank as the Data field. Set the start color to Green, the middle color to Yellow, and the end color to Red.

Go to the Legend tab and make sure that no legend is selected. You can select one if you need to, but another legend for a common rule is not very useful. Click OK to save the settings.

22.   Let's make a few more modifications. Double-click the Map Title and change it to Sales by Customers – US.

23.   Right-click on the Legend and open its properties. Leave the Show legend outside the viewport checkbox unchecked. If required, change the Position. Select Wide table from the Legend layout drop-down to widen the legend layout. Click OK to save the settings. Now we can change the title of the legend. Double-click on it and change it to Customer Rank.

24.   It would be useful to have tooltips for the markers, showing the name of the customer and the total amount. In order to add tooltips, click the down-arrow on PointLayer1 and select Point Properties. Set the expression below in the Tooltip box.

=Fields!Name.Value & " - " & format(Fields!TotalAmount.Value, "C0")

Click OK to save the settings.

25.   Now preview the report; it should look as below:

26.   One more thing needs to be done to complete both reports: we have to set an action in the first report that opens this report. Open the first report, SalesByCountry, in the designer and double-click the Map Item. On PolygonLayer1, click the down-arrow and select the Polygon Properties… item. Open the Action tab in the properties window and select the Go to report radio button. Open the expression window for the Specify a report drop-down and add the expression below.

=IIF(Fields!CountryRegionCode.Value = "US", "SalesByCustomer", Nothing)

Note that we have enabled drill-through only for the US. We could enable it for other countries if we created reports for them too. Alternatively, a single report could show data for all countries by loading all the data, including the spatial data, at runtime.

Click OK to save the settings.

That's it! Both reports are now implemented. There are many additional properties and settings that can be configured on the Map Item; explore and use them. If you have any doubts, need clarification, or have suggestions, please add a comment.


Reporting Services 2008 R2: Geospatial Visualization – Part I

Posted by Alin D on September 11, 2010

Geospatial visualization is one of the key new features of SQL Server 2008 R2 Reporting Services. Spatial support first appeared in SQL Server 2008, and you may have noticed (or perhaps used) its two spatial data types: geometry and geography. This support is extended in Reporting Services 2008 R2, which lets you create map reports with the Map Report Item, as we will demonstrate in this article.
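As a quick refresher, here is a minimal, standalone T-SQL sketch of the two types (it is not tied to any of the reports in this article; the coordinates are arbitrary):

-- geography uses round-earth coordinates (longitude latitude in WKT, SRID 4326 here);
-- geometry uses a flat, planar coordinate system.
DECLARE @g geography = geography::STGeomFromText('POINT(-122.335 47.608)', 4326);
DECLARE @p geometry  = geometry::STGeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0);

SELECT @g.STAsText() AS GeographyPoint,
       @p.STArea()   AS PolygonArea;   -- 100 square units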

In this article we will examine the following:

  • Map report item and sources for spatial data
  • Map layers
  • Creating a Map Report using shapefiles
  • Adding analytical data onto Map Layer

Map Report Item and Sources for Spatial Data

The Map Report Item helps us to combine geospatial data with data to be analyzed, allowing us to set the geospatial data as the background. Spatial data can be provided to the Map Report Item in three ways:

1. Map Gallery Reports

Map Gallery reports are a set of reports installed by Reporting Services. These reports are embedded with spatial data. By default, they are stored in <drive>:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\MapGallery. These reports can be used as the source for maps and will be embedded in the report you create.

2. ESRI Shapefiles

ESRI (Environmental Systems Research Institute, Inc.) shapefiles are spatial data files with extensions such as .shp, .dbf, and .shx. Reporting Services does not require all of these file types when using them as the source for spatial data: it needs only the .shp file, which contains the geographic or geometric shapes, and the .dbf file, which contains the attributes for those shapes. If you use shapefiles for drawing the map, make sure that both files are in the same folder.

3. SQL Server spatial data stored in a database

If you have stored spatial data in a geography or geometry data type column in your SQL Server database, you can use it as the source for maps.
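For example, a query along the lines of the sketch below (it assumes the AdventureWorks2008R2 sample, whose Person.Address.SpatialLocation column is of type geography) could feed a point layer directly:

-- Any result set that includes a geography or geometry column can act as a map source.
SELECT a.AddressID,
       a.City,
       a.SpatialLocation   -- the geography column the map layer will plot
FROM Person.Address AS a
WHERE a.SpatialLocation IS NOT NULL;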

Map Layers

Reporting Services maps are layered elements. A Map Report Item can have one or more layers. Each layer can contain spatial data used to draw the map, analytical data, and other properties such as color and size. There are four main types of layers:

  • Polygon – used to show geographic areas such as countries or states.
  • Line – used to show paths and routes.
  • Point – used to show specific geographic places.
  • Tile – used to display Bing maps in reports.

The initial map layer can be added to the Map Report Item through the Map Wizard. If more layers are required, you can use the New Layer Wizard.

Creating the first Report

If you look at the image above, you will see that the first report, titled "Sales by Country", shows a world map with several countries in color. Basically, it shows where the company has made sales. Note that this is based on the AdventureWorks2008R2 database (if required it can be downloaded here). Let's look at a step-by-step implementation.

1. Open Microsoft Business Intelligence Development Studio and create a new Report Server Project. Note that this report can also be created with Report Builder 3.0.

2. Add a new report. Do not use the Report Wizard because it does not support creating maps; instead use Report. Name the report SalesByCountry.rdl.

3. Open the toolbox and double-click the Map report item. This adds a Map Report Item to the report and opens the New Map Layer wizard. The first page of the wizard is for setting the source for spatial data. There are three options: Map gallery, ESRI shapefiles, and SQL Server spatial query.

Since the Map Gallery does not contain a report with a world map, we need to use a shapefile to draw it. The file used for this demonstration was downloaded from http://www.vdstech.com/map_data.htm (thanks to Pinal Dave for publishing this address in his post). Go to the site, download world.zip, and place the extracted files in one folder. You should see three files: world.dbf, world.shp, and world.shx.

Now select the second radio button and browse to the world.shp file you just downloaded. Once browsed and selected, your screen should be as below:

Click Next to continue.

4. Review the settings in Choose spatial data and map view options. Click Next.

5. There are three map visualizations under Choose Map Visualization. If you select either Color Analytical Map or Bubble Map, you will need to provide a dataset that contains the data to be analyzed. If you select Basic Map, you can continue without providing a dataset. For this demonstration we will add analytical data now even though it could be added later. Select Color Analytical Map and click Next.

6. In Choose the analytical dataset, select the second radio button, Add a dataset that includes fields that relate to the spatial data that you chose earlier, and click Next.

7. On Choose a connection to a data source, click New to create a data source for the analytical data.

8. Name the data source  AdventureWorks2008R2 and set the connection to the AdventureWorks2008R2 database.

Click OK and then Next to continue.

9. The Design a query window opens. Paste the query below and click Next. The query returns the country code and the total sales for each country.

SELECT t.CountryRegionCode, SUM(h.TotalDue) AS TotalDue
FROM Sales.SalesOrderHeader h
INNER JOIN Sales.SalesTerritory t
    ON h.TerritoryID = t.TerritoryID
GROUP BY t.CountryRegionCode

10. Next is the Specify the match fields for spatial and analytical data screen. This allows us to specify the relationship between the spatial data loaded from the world.shp file and the data returned by the query in the previous step. The first grid on the screen allows you to set links between the two data sources. The second grid shows the columns in the spatial dataset, highlighting the column we use for matching. The third grid contains data from the query, again highlighting the matching column. Make the link as below:

If you need to see the spatial data columns and their data when the report is complete, the easiest way is to open the report's code through Solution Explorer. Right-click the report in the project and select View Code. This opens the RDL file of the report. If spatial data is embedded in the report, there will be a node named MapFields which contains the data.
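If you want to check which values the analytical query will supply for the match before wiring it up, a quick side query such as the one below (a sketch against the same tables) lists the distinct country codes; these must line up with the country field in the shapefile:

SELECT DISTINCT t.CountryRegionCode
FROM Sales.SalesTerritory AS t
ORDER BY t.CountryRegionCode;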

11. On the Choose color theme and data visualization screen, select [CountryRegionCode] for Field to visualize.  Click Finish to close the wizard.

12. On the report layout, we do not require the Distance Scale and Color Scale. Right-click the Map area and deselect them from the menu.

13. Change the Map Title to Sales by Country. You can edit it by double-clicking it. Change the legend title to Country. You might have noticed that the legend is placed outside the map; to bring it into the map area, right-click the Legend and open the Legend Properties. Deselect the Show legend outside the viewport checkbox in the General section of Map Legend Properties. Preview the report and it should look as below:

14. Now that the report is complete, we can make a few changes. First we will change the name of the analytical dataset. Go to design mode and open the Report Data window. Expand Datasets and open the dataset named Dataset1. Change its name to SalesByCountry.

15. If you are in design mode, you should see the Map Layers pane. If you do not see it, double-click the map item. There is only one layer, named PolygonLayer1. To see the context menu of the polygon layer, click the down arrow. Click Layer Data… to see the properties of the data added to the layer.

16. Note that the Data embedded in report radio button is selected in the General section. This means the spatial data loaded from the .shp file has been embedded in the report. If you want to draw the map at runtime, you may select one of the other options.

17. Click the Analytical data section. Note that the SalesByCountry dataset is selected as the analytical dataset. This section also shows how the two datasets are linked. Click Cancel to close the window.

18. Click the down arrow again on PolygonLayer1 to get to Polygon Properties…. Expand the Label text dropdown in the General section and select [CountryRegionCode].

19. Click the (fx) button next to Tooltip in the General section and set the expression as below:

=format(Fields!TotalDue.Value, "C0")

20. Click the down arrow again on PolygonLayer1 and select Polygon Color Rule… to see how colors have been applied to the map. You should see that the third radio button, Visualize data using color ranges, is selected. This is appropriate if you want to compare sales in each country based on the sales value. We can apply a different color scheme if required: select the second radio button, Visualize data using color palette, and ensure Data field is set to [CountryRegionCode] and Palette to Random.

22. Once the report is published, navigate to the site and it should appear as below:

We have now successfully created our Map Report. The next map report, which is a drill-down from this report, will be generated using the Map Gallery and two map layers.


Pass Mcse 70-290 Exam Easily

Posted by Alin D on September 5, 2010

Pass Mcse 70-290 Exam Easily

MCSE 2003 70-290 Certification

Get Certified in Days

According to our survey, over 85% of the candidates acknowledge that they have spent needless time and money before finding the most suitable solution to pass the 70-290 exams. It doesn’t matter if you are just starting out and looking for the most suitable way to get certified, or a skilled technician looking for the most efficient way to get certified, we have the right solution for you.

We provide the following to help you get certified in the most convenient way

24/7, around-the-clock consulting service that will assist you, guide you and help you until you get certified. The price also includes exam vouchers and all other related expenses. There is no further cost to attain your certification.

Our Guarantee

We will refund any payment that you make, should you for any reason fail to get certified. The refund is an unconditional total refund of any moneys paid.

Why MCSE 2003

MCSE 2003 70-290 Certifications are among the most specialized certifications available today. The MCSE 2003 70-290 Certification gives you industry recognition for your expertise in business solutions based on the Microsoft Windows Server 2003 platform and Microsoft 2003 server software. Implementation responsibilities include installing, configuring, and troubleshooting network systems. The MCSE 2003 credential is one of the most widely recognized technical certifications in the industry, a credential in high demand. By earning the premier MCSE credential, individuals demonstrate that they have the skills necessary to lead organizations in the successful design, implementation, and administration of the most advanced Microsoft Windows platform and Microsoft server products.

MCSE 2003 Certification Requirement:

1. Core exams (six exams required)

• Four networking system exams: (four exams required)

Exam 70-290: Managing and Maintaining a Windows Server 2003 Environment.

Exam 70-291: Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure.

Exam 70-293: Planning and Maintaining a Windows Server 2003 Network Infrastructure.

Exam 70-294: Planning, Implementing, and Maintaining a Windows Server 2003 Active Directory Infrastructure.

• One client operating system exam: (one exam required)

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-270: Installing, Configuring, and Administering Microsoft Windows XP Professional.

Exam 70-210: Installing, Configuring, and Administering Microsoft Windows 2000 Professional.

• One design exam:

Exam 70-297: Designing a Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Windows Server 2003 Network.

2. Elective exams (one exam required)

Exam 70-089: Designing, Implementing, and Managing a Microsoft Systems Management Server 2003 Infrastructure.

Exam 70-227: Installing, Configuring, and Administering Microsoft Internet Security and Acceleration (ISA) Server 2000, Enterprise Edition.

Exam 70-228: Installing, Configuring, and Administering Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-229: Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-235: TS: Developing Business Process and Integration Solutions Using BizTalk Server.

Exam 70-236: TS: Microsoft Exchange Server 2007, Configuring.

Exam 70-262: TS: Microsoft Office Live Communications Server 2005 – Implementing, Managing, and Troubleshooting.

Exam 70-281: Planning, Deploying, and Managing an Enterprise Project Management Solution.

Exam 70-282: Designing, Deploying, and Managing a Network Solution for a Small- and Medium-Sized Business.

Exam 70-284: Implementing and Managing Microsoft Exchange Server 2003.

Exam 70-285: Designing a Microsoft Exchange Server 2003 Organization.

Exam 70-297: Designing a Microsoft Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Microsoft Windows Server 2003 Network.

Exam 70-299: Implementing and Administering Security in a Microsoft Windows Server 2003 Network.

Exam 70-301: Managing, Organizing, and Delivering IT Projects by Using Microsoft Solutions Framework 3.0.

Exam 70-350: Implementing Microsoft Internet Security and Acceleration (ISA) Server 2004.

Exam 70-431: TS: Microsoft SQL Server 2005 – Implementation and Maintenance.

Exam 70-445: Microsoft SQL Server 2005 Business Intelligence – Implementation and Maintenance.

Exam 70-500: TS: Microsoft Windows Mobile Designing, Implementing, and Managing.

Exam 70-501: TS: Microsoft Windows Server 2003 Hosted Environments, Configuring, and Managing.

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-624: TS: Deploying and Maintaining Windows Vista Client and 2007 Microsoft Office System Desktops.

Exam 70-630: TS: Microsoft Office SharePoint Server 2007, Configuring.

Exam 70-631: TS: Configuring Microsoft Windows SharePoint Services 3.0.



SQL Server 2008 Reporting Services

Posted by Alin D on August 13, 2010

For many years, SQL Server did not have a good answer for creating attractive reports that summarize information in ways that make sense to business users. Finally, Microsoft shipped SQL Server Reporting Services. Like Notification Services, Reporting Services was originally an add-on for SQL Server 2000, and now it’s a part of the core product. In this chapter, you’ll learn how to use Reporting Services to produce your own reports.

SSRS 2008 Tutorial: The Reporting Services Architecture

Reporting Services has quite a few components that work together seamlessly to provide a complete reporting solution. The full Reporting Services architecture includes development tools, administration tools, and report viewers. There are a number of ways to get to Reporting Services programmatically, including URL, SOAP, and WMI interfaces.

Figure 17-1 shows a simplified diagram of the main Reporting Services components that we’ll be using in this chapter.

Figure 17-1: Report Server architecture

In this chapter you’ll learn about these components:

  • Report Server is the core engine that drives Reporting Services.
  • Report Manager is a Web-based administrative interface for Reporting Services.
  • Report Designer is a developer tool for building complex reports.
  • Report Builder is a simplified end-user tool for building reports.
  • The Report Server database stores report definitions. Reports themselves can make use of data from many different data sources.
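If you are curious about what that database actually holds, the query below is a rough sketch for a read-only peek. It assumes the default database name ReportServer; the catalog tables are not a supported programming interface, so treat this as exploration only.

-- Exploration only: lists the items the Report Server knows about.
SELECT Name, Path, Type   -- Type distinguishes folders, reports, data sources, and models
FROM ReportServer.dbo.Catalog
ORDER BY Path;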

SSRS 2008 Tutorial: Using Report Designer

Reporting Services includes two tools for creating reports:

  • Report Designer can create reports of any complexity that Reporting Services supports, but requires you to understand the structure of your data and to be able to navigate the Visual Studio user interface.
  • Report Builder provides a simpler user interface for creating ad hoc reports, directed primarily at business users rather than developers. Report Builder requires a developer or administrator to set up a data model before end users can create reports.

We’ll start our tour of Reporting Services with Report Designer. Report Designer runs inside the Business Intelligence Development Studio shell, and offers several ways to create reports. You can either use the Report Wizard to quickly create a report, or you can use a set of design tools to build a report from scratch. You can also use the design tools to modify a report created with the wizard.

Using the Report Wizard

The easiest way to create a report in Report Designer is to use the Report Wizard. Like all wizards, the Report Wizard walks you through the process in step-by-step fashion. You can make the following choices in the wizard:

  • The data source to use
  • The query to use to retrieve data
  • Whether to use a tabular or matrix layout for the report
  • How to group the retrieved data
  • What visual style to use
  • Where to deploy the finished report

Try It!

To create a simple report using the Report Wizard, follow these steps:

  1. Launch Business Intelligence Development Studio.
  2. Select File > New >Project.
  3. Select the Business Intelligence Projects project type.
  4. Select the Report Server Project Wizard template.
  5. Name the new report ProductReport1 and pick a convenient location to save it in.
  6. Click OK.
  7. Read the first page of the Report Wizard and click Next.
  8. Name the new data source AdventureWorksDS.
  9. Click the Edit button.
  10. Log on to your test server.
  11. Select the AdventureWorks2008 database.
  12. Click OK.
  13. Click the Credentials button.
  14. Select Use Windows Authentication.
  15. Click OK.
  16. Check the Make This a Shared Data Source checkbox. This will make this particular data source available to other Reporting Services applications in the future.
  17. Click Next.
  18. Click the Query Builder button.
  19. If the full query designer interface does not display by default, click the query designer toolbar button at the far left end of the toolbar. Figure 17-2 shows the full query designer interface.

    Figure 17-2: Query Builder

  20. Click the Add Table toolbar button.
  21. Select the Product table and click Add.
  22. Click Close.
  23. Check the Name, ProductNumber, Color, and ListPrice columns.
  24. Click OK.
  25. Click Next.
  26. Select the Tabular layout and click Next.
  27. Move the Color column to the Group area, and the other three columns to the Detail area, as shown in Figure 17-3.

    Figure 17-3: Grouping columns in the report

  28. Click Next.
  29. Select the Stepped layout and click Next.
  30. Select the Ocean style and click Next.
  31. Accept the default deployment location and click Next.
  32. Name the report ProductReport1.
  33. Check the Preview Report checkbox.
  34. Click Finish.

Figure 17-4 shows the finished report, open in Report Designer.

Figure 17-4: Report created by the Report Wizard

Figure 17-4 shows the main features of Report Designer:

  • The Datasets window shows the data that is available to the report.
  • The main design window lets you view the report itself. You can see a preview of the report, work with the report in a layout designer, or work with the query that returns the data for the report.
  • The Solution Explorer, Output, and Properties windows are the standard Visual Studio windows.

Modifying a Report

Now that you've created a report with the Report Wizard, you can modify it with the Report Designer. If you've used any sort of visual report design tool in the past, you should have no problem making changes here. Among the possibilities:

  • You can change the available data or the sort order for the report by modifying the query on the Data tab.
  • You can resize or rearrange controls on the Layout tab.
  • You can use the Properties window to change properties of individual controls including their font, alignment, colors, and so on.

Try It!

To modify the report that you just created, follow these steps:

  1. Click the Design tab to make the report editable.
  2. In the Report Data window, right-click on DataSet1 and select Dataset Properties.
  3. In the Dataset Properties window, click the Query Designer button.
  4. Select a Descending sort type for the ListPrice column and click OK.
  5. Click OK.
  6. Click in the textbox at the top of the report, where the report name is displayed.
  7. Click a second time in the textbox to put it in edit mode and change the value of this control to Products By Color.
  8. Click on the header for the Product column.
  9. Place the cursor between the column selectors above the Name and Product Number columns to display a double-headed arrow. Hold down the mouse button and drag the cursor to the right to widen the Name column.
  10. Place the cursor between the column selectors above the Product Number and ListPrice columns to display a double-headed arrow. Hold down the mouse button and drag the cursor to the right to widen the Product Number column.
  11. Click on the Preview tab to view the modified report, as shown in Figure 17-5.

    Figure 17-5: Modified product report

Designing a Report From Scratch

You can also use Report Designer to build your own reports starting from scratch. In general, you’ll follow these steps to create a report:

  1. Create a Report project in Business Intelligence Design Studio or open an existing Report project.
  2. Add a report to the project.
  3. Create one or more datasets for the report.
  4. Build the report layout.

Try It!

To create a fresh report in Report Designer, follow these steps:

  1. Select File > New > Project.
  2. Select the Business Intelligence Projects project type.
  3. Select the Report Server Project template.
  4. Name the new report ProductReport2 and pick a convenient location to save it in.
  5. Right-click on the Reports node in Solution Explorer and select Add > New Item.
  6. Select the Report template.
  7. Name the new report ProductReport2.rdl and click Add.
  8. In the Report Data window, select New > Data Source.
  9. Name the new Data Source AdventureWorksDS.
  10. Select the Embedded Connection option and click on the Edit button.
  11. Connect to your test server and choose the AdventureWorks2008 database.
  12. Click OK.
  13. Click OK again to create the data source.
  14. In the Report Data window, select New > Dataset.
  15. Name the dataset dsLocation.
  16. Click the Query Designer button.
  17. If the full Query Designer does not appear, click on the Edit As Text button.
  18. Click the Add Table button.
  19. Select the Location table.
  20. Click Add.
  21. Click Close.
  22. Check the boxes for the Name and CostRate columns.
  23. Sort the dataset in ascending order by Name and click OK.
  24. Click OK again to create the dataset.
  25. Open the toolbox window (View > Toolbox).
  26. Double-click the Table control.
  27. Switch back to the Report Data window.
  28. Expand the dataset to show the column names.
  29. Drag the Name field and drop it in the first column of the table control on the design tab.
  30. Drag the CostRate field from the Report Data window and drop it in the second column of the table control.
  31. Place the cursor between the column selectors above the Name and CostRate columns to display a double-headed arrow. Hold down the mouse button and drag the cursor to the right to widen the Name column.
  32. Figure 17-6 shows the report in Design view.

    Figure 17-6: Designing a report from scratch

  33. Select the Preview tab to see the report with data.

SSRS 2008 Tutorial: Publishing a Report

Creating reports in Business Intelligence Development Studio is good for developers, but it doesn’t help users at all. In order for the reports you build to be available to others, you must publish them to your Reporting Services server. To publish a report, you can use the Build and Deploy menu items in Business Intelligence Development Studio. Before you do this, you need to check the project’s configuration to make sure that you’ve selected an appropriate server for the deployment.

Try It!

You can publish any report, but the first report you created is probably more visually interesting at this point. To publish the first report, follow these steps:

  1. Select File > Recent Projects and choose your ProductReport1 project.
  2. Select Project > ProductReport1 Properties.
  3. Click the Configuration Manager button.
  4. Fill in the Target Server URL for your Report Server. If you're developing on the same computer where Reporting Services is installed, and you installed in the default configuration, this will be http://localhost/ReportServer. Figure 17-7 shows the completed Property Pages.

    Figure 17-7: Setting the active configuration

  5. Click OK.
  6. Select Build > Deploy ProductReport1. The Output Window will track the progress of BIDS in deploying your report, as shown in Figure 17-8. Depending on the speed of your computer, building the report may take some time.

    Figure 17-8: Setting report project properties

  7. Launch a web browser and enter the address http://localhost/reports.
  8. It may take several minutes for the web page to display; Reporting Services goes to sleep when it hasn't been used for a while and can take a while to spin up to speed. Figure 17-9 shows the result.

    Figure 17-9: Deploying a report

  9. Click the link for the ProductReport1 folder.
  10. Click the link for the ProductReport1 report.

SSRS 2008 Tutorial: Using Report Builder

Report Designer gives you one way to create reports for Reporting Services, but it's not the only way. SQL Server 2008 also includes a tool directed at end users named Report Builder. Unlike Report Designer, which is aimed at developers, Report Builder presents a simplified view of the report-building process and is intended for business analysts and other end users.

Building a Data Model

Report Builder doesn’t let end users explore all of a SQL Server database. Instead, it depends on a data model: a preselected group of tables and relationships that a developer has identified as suitable for end-user reporting. To build a data model, you use Business Intelligence Development Studio. Data models contain three things:

  • Data Sources connect the data model to actual data.
  • Data Source Views draw data from data sources.
  • Report Models contain entities that end users can use on reports.

Try It!

To create a data model, follow these steps:

  1. If it’s not already open, launch Business Intelligence Development Studio
  2. Select File > New > Project.
  3. Select the Business Intelligence Projects project type.
  4. Select the Report Model Project template.
  5. Name the new project AWSales and save it in a convenient location.
  6. Click OK.
  7. Right-click on Data Sources in Solution Explorer and select Add New Data Source.
  8. Read the first page of the Add New Data Source Wizard and click Next.
  9. Click New.
  10. In the Connection Manager dialog box connect to the AdventureWorks2008 database on your test server and click OK.
  11. Click Next.
  12. Name the new data source AdventureWorks and click Finish.
  13. Right-click on Data Source Views in Solution Explorer and select Add New Data Source View.
  14. Read the first page of the Add New Data Source View Wizard and click Next.
  15. Select the AdventureWorks data source and click Next.
  16. Select the Product(Production) table and click the > button to move it to the Included Objects listbox.
  17. Select the SalesOrderDetail(Sales) table and click the > button to move it to the Included Objects listbox.
  18. Click the Add Related Tables button.
  19. Click Next.
  20. Click Finish.
  21. Right-click on Report Models in Solution Explorer and select Add New Report Model.
  22. Read the first page of the Report Model Wizard and click Next.
  23. Select the Adventure Works2008 data source view and click Next.
  24. Keep the default rules selection, as shown in Figure 17-10, and click Next.

    Figure 17-10: Creating entities for end-user reporting

  25. Choose the Update Statistics option and click Next.
  26. Click Run to complete the wizard.
  27. Click Finish. If you get a warning that a file was modified outside the source editor, click Yes.
  28. Select Build > Deploy AWSales to deploy the report model to the local Reporting Services server.

Building a Report

Report Builder itself is a ClickOnce Windows Forms application. That means that it’s a Windows application that end users launch from their web browser, but it never gets installed on their computer, so they don’t need any local administrator rights on their computer to run it. To get started with Report Builder, browse to your Reporting Services home page. Typically, this will have a URL such as http://ServerName/Reports (or http://localhost/Reports if you’re running the browser on the same box with SQL Server 2008 itself). Figure 17-11 shows the Reporting Services home page.

Figure 17-11: Reporting Services home page

To run Report Builder, click the Report Builder link in the home page menu bar. Report Builder will automatically load up all of the available report models and wait for you to choose one to build a report from.

Try It!

  1. Open a browser window and navigate to http://localhost/Reports (or to the appropriate Report Server URL if you’re not working on the report server).
  2. Click the Report Builder link.
  3. Depending on your operating system, you may have to confirm that you want to run the application.
  4. After Report Builder is loaded, select the AdventureWorks2008 report model and the table report layout. Click OK. Figure 17-12 shows the new blank report that Report Builder will create.

    Figure 17-12: New report in Report Builder

    The Explorer window to the left of the design surface shows all of the tables in the report model. Beneath that, the Fields window shows the attributes in the currently-selected entity. Note that not everything in this window is a column in the table: the report model also contains aggregate entities such as Total Safety Stock Level and automatically calculated fields.
  5. Select the Product table.
  6. Drag the Name field and drop it in the area labeled Drag and Drop Column Fields.
  7. Click on Special Offer Products in the Explorer window to show related child tables.
  8. Click on Sales Order Details.
  9. Drag the Total Order Qty field and drop it to the right of the Name field.
  10. Click where it says Click to Add Title and type Product Sales.
  11. Click the Run Report button to produce the report shown in Figure 17-13.

    Figure 17-13: Report in Report Builder

  12. Click the Sort and Group toolbar button.
  13. Select to sort by Total Order Qty descending.
  14. Click OK.
  15. Select File > Save.
  16. Name the new report Product Sales.
  17. Click Save. This will publish the report back to the Reporting Services server that you originally downloaded Report Builder from.

SSRS 2008 Tutorial: Using Report Manager

The Web home page for Reporting Services provides a complete interface for managing reports (as well as other objects such as data sources and models) after they are created. This interface, known as Report Manager, is intended primarily for database administrators, but as a developer you should know about its capabilities for managing and modifying reports.

When you click on a report in Report Manager, you’ll see the report’s data, as shown in Figure 17-14.

Figure 17-14: Report in Report Manager

Note that reports in Report Manager open in a tabbed interface. The four tabs allow you to perform various functions:

  • View allows you to see the current data in the report.
  • Properties lets you adjust such things as the report’s name, data source, security credentials, caching, and end-user security.
  • History shows you saved snapshots of the report.
  • Subscriptions lets you create subscriptions to the report. Subscriptions allow you to set up periodic delivery of reports to end users by e-mail or file share.

Printing and Exporting Reports

When viewing reports in the Report Manager, users can print the reports directly from their browser. The print button in the report toolbar utilizes an ActiveX control for client-side printing. The first time this button is clicked on a given computer, the user is prompted to install the ActiveX control, as in Figure 17-15. After that, the standard Windows print dialog box is displayed for the user to select a printer and paper size, etc.

Figure 17-15: ActiveX install prompt.

Users can also export the report into any of several handy formats. Table 17-1 lists the available export formats.

  • XML – Creates a data file in XML format.
  • CSV – Creates a comma-delimited text file of report data.
  • PDF – Creates an Adobe Acrobat file with the formatted report.
  • MHTML – Creates a Web Archive file with the formatted report.
  • Excel – Creates an MS Excel spreadsheet with the formatted report.
  • TIFF – Creates a TIFF graphic of the formatted report.
  • Word – Creates an MS Word document with the formatted report.

Table 17-1: Export Formats
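The same formats are also reachable through Reporting Services URL access: appending the rs:Format parameter to a report URL renders the report directly in that format. For example, a request like the one below (a sketch that reuses the ProductReport1 path deployed earlier in this chapter) returns the report as a PDF:

http://localhost/ReportServer?/ProductReport1/ProductReport1&rs:Command=Render&rs:Format=PDF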

SSRS 2008 Tutorial: Exercises

Use Report Builder to create a report from the AdventureWorks2008 data model showing the minimum and maximum order quantity for orders taken by each salesperson in the company. You’ll find the necessary data in the SalesOrderHeader and SalesOrderDetail tables.
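If you would like to check the figures your report shows against the database directly, a plain T-SQL version of the same aggregation (a reference sketch only, not part of the Report Builder solution) looks like this:

-- Reference check for the exercise: min and max order quantity per salesperson.
SELECT h.SalesPersonID,
       MIN(d.OrderQty) AS MinOrderQty,
       MAX(d.OrderQty) AS MaxOrderQty
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
    ON d.SalesOrderID = h.SalesOrderID
WHERE h.SalesPersonID IS NOT NULL   -- online orders have no salesperson
GROUP BY h.SalesPersonID
ORDER BY h.SalesPersonID;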

Solutions to Exercises

  1. Open a browser window and navigate to http://localhost/Reports (or to the appropriate Report Server URL if you’re not working on the report server).
  2. Click the Report Builder link.
  3. Select the AdventureWorks2008 report model and the table report layout.
  4. Click OK.
  5. Select the Sales Order Header table.
  6. Drag the Sales Person ID field and drop it in the area labeled Drag and Drop Column Fields.
  7. Click on Sales Order Details in the Explorer window.
  8. Expand the Total Order Qty field in the Fields window to show the alternative fields beneath it.
  9. Drag the Min Order Qty field and drop it to the right of the Sales Person ID field.
  10. Drag the Max Order Qty field and drop it to the right of the Min Order Qty field.
  11. Click where it says Click to Add Title and type Sales Performance.
  12. Click the Run Report button to produce the report shown in Figure 17-15.

Figure 17-15: Sales performance report
