Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server, and Azure


SQL Server Analysis Services partitions – Create and Manage

Posted by Alin D on May 12, 2011

Partitions are portions of a SQL Server Analysis Services measure group that hold some or all of the measure group’s data.

When a measure group is first created, it contains a single partition corresponding to all the data in your fact table or view. Additional partitions need to be created for any measure group with more than 20 million rows.

Since a majority of corporate databases have far more than 20 million rows in fact tables, you should know how to create partitions and also be aware of good partition design practices.

You can define partitions using the Business Intelligence Development Studio (BIDS). On the Partitions tab within your project, simply click the New Partition link near a measure group to open the Partition Wizard. (I won’t cover the steps of the Partition Wizard here because it is fairly simple to follow.)

An alternative method to creating new partitions is through XMLA scripts, which is what BIDS does behind the scenes.

You can script an existing partition in SQL Server Management Studio (SSMS) by right-clicking a partition and then choosing Script Partition as CREATE to open a new query window. You will need to edit certain properties such as the partition identifier, its name and the query used for populating the partition.

Here is a sample XMLA for a partition:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<ParentObject>
<DatabaseID>Adventure Works DW 2008</DatabaseID>
<CubeID>Adventure Works</CubeID>
<MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
</ParentObject>
<ObjectDefinition>
<Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">
<ID>Internet_Sales_2001</ID>
<Name>Internet_Sales_2001</Name>
<Source xsi:type="QueryBinding">
<DataSourceID>Adventure Works DW</DataSourceID>
<QueryDefinition>
<QueryDefinition>
SELECT
[dbo].[FactInternetSales].[ProductKey],
[dbo].[FactInternetSales].[OrderDateKey],
[dbo].[FactInternetSales].[DueDateKey],
[dbo].[FactInternetSales].[ShipDateKey],
[dbo].[FactInternetSales].[CustomerKey],
[dbo].[FactInternetSales].[PromotionKey],
[dbo].[FactInternetSales].[CurrencyKey],
[dbo].[FactInternetSales].[SalesTerritoryKey],
[dbo].[FactInternetSales].[SalesOrderNumber],
[dbo].[FactInternetSales].[SalesOrderLineNumber],
[dbo].[FactInternetSales].[RevisionNumber],
[dbo].[FactInternetSales].[OrderQuantity],
[dbo].[FactInternetSales].[UnitPrice],
[dbo].[FactInternetSales].[ExtendedAmount],
[dbo].[FactInternetSales].[UnitPriceDiscountPct],
[dbo].[FactInternetSales].[DiscountAmount],
[dbo].[FactInternetSales].[ProductStandardCost],
[dbo].[FactInternetSales].[TotalProductCost],
[dbo].[FactInternetSales].[SalesAmount],
[dbo].[FactInternetSales].[TaxAmt],
[dbo].[FactInternetSales].[Freight],
[dbo].[FactInternetSales].[CarrierTrackingNumber],
[dbo].[FactInternetSales].[CustomerPONumber]
FROM [dbo].[FactInternetSales]
WHERE OrderDateKey &lt;= '20011231'
</QueryDefinition>
</Source>
<StorageMode>Molap</StorageMode>
<ProcessingMode>Regular</ProcessingMode>
<ProactiveCaching>
<SilenceInterval>-PT1S</SilenceInterval>
<Latency>-PT1S</Latency>
<SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
<ForceRebuildInterval>-PT1S</ForceRebuildInterval>
<AggregationStorage>MolapOnly</AggregationStorage>
<Source xsi:type="ProactiveCachingInheritedBinding">
<NotificationTechnique>Server</NotificationTechnique>
</Source>
</ProactiveCaching>
<EstimatedRows>1013</EstimatedRows>
<AggregationDesignID>Internet Sales 1</AggregationDesignID>
</Partition>
</ObjectDefinition>
</Create>

Note that when defining an effective partition, specifying the source of the data it will hold is perhaps the most important part.

As a rule of thumb, your partitions should contain between five and 20 million rows of fact data. Furthermore, you should keep each partition file under roughly 500 MB. Partition files are stored in your Analysis Services installation folder under data\<database_name>\<cube_name>\<measure_group_name>.

You can also bind a partition to a table, view or a SQL query. If a relational data warehouse has multiple tables holding fact data, you should bind partitions to such tables as long as each table size is constrained as advised above. If you have a single, large fact table, you could write SQL queries for each Analysis Services partition to retrieve only part of this data.

Views provide a nice alternative for partition binding, especially when testing cubes. For example, if a fact table has millions of rows, processing is going to take a long time. For testing the solution, you don’t necessarily need to load all the data. Instead, create a view that selects only a portion of rows from the large table(s).
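
A minimal sketch of such a view, using the Adventure Works DW fact table as an example (object names are illustrative; adjust them to your own schema):

-- Limits the fact rows loaded while testing the cube to roughly one year of data
CREATE VIEW dbo.vFactInternetSalesSample
AS
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey <= 20011231;
GO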

Later, when you’re ready to deploy your solution into production, alter your partition definitions so that they are bound to the appropriate tables, queries, or views.

How do you decide what data to include in each partition? SQL Server Analysis Services uses partitions to speed up MDX queries. Each partition contains an XML file that defines the range of dimension member identifiers in a given partition. When an MDX query is submitted, the Analysis Services engine decides what partition files to scan based on the values in each partition’s XML file.

The XML file is created when you process the partition and the file can be found in each partition folder (the file name is info.xml). Don’t try to edit this file – the dimension key references are internal values that cannot be retrieved from SQL Server Analysis Services.

If the data requested by an MDX query is spread across all partitions in your measure group then Analysis Services has no choice but to read every single partition. To see how every partition in the measure group is read, record a SQL Profiler trace when you run such a query. If the requested data is contained in a single, small partition, your query will only have to scan a single partition file.

Reading a single 500 MB file will invariably beat scanning through 200 files of the same size. However, if you have 200 partitions to read, Analysis Services could scan some of them in parallel, and the query won’t necessarily be 200 times slower without proper partitioning.

For best MDX query performance, you should tailor partition design to the pattern of common queries. Most SQL Server Analysis Services solutions start with measure groups partitioned using a date or periodicity dimension, each partition spanning one month’s or one day’s data.

This is a reasonable approach if your queries are typically focused on a given month or several months. But what if your queries examine data across all months and are specific to product categories? In that case, partitioning only by month won’t be optimal.

If you have 10 years’ worth of data partitioned by month (which is not unusual), each query would have to examine 120 partitions. In this case, query performance could improve if you further partition data by the product category dimension.

For example, dealership sales cube users may only be interested in comparing sales across time for a particular product category – cars, trucks or motorcycles, for example. For such cubes, you could create partitions for each month and each product category.
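
As a sketch, a per-partition source query for such a design would filter on both the month and the product category. The join path and key names below follow the Adventure Works DW sample schema and are assumptions; adjust them to your own star schema:

SELECT f.*
FROM dbo.FactInternetSales AS f
JOIN dbo.DimProduct AS p
  ON p.ProductKey = f.ProductKey
JOIN dbo.DimProductSubcategory AS s
  ON s.ProductSubcategoryKey = p.ProductSubcategoryKey
WHERE f.OrderDateKey BETWEEN 20010101 AND 20010131  -- one month
  AND s.ProductCategoryKey = 1                      -- one product category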

Like any other SQL Server Analysis Services object, partitions have a multitude of properties. Perhaps one of the most frequently discussed is partition slice. This property defines a portion of the measure group data that Analysis Services expects to be exposed by the partition.

Most Analysis Services literature suggests that this property does not have to be set for partitions that use Multidimensional OLAP (MOLAP) storage. While in most situations Analysis Services is smart enough to figure out what dimension members are included in each partition by examining data IDs in info.xml files, to be safe you should always set the partition slice, regardless of the storage mode of your partition.

Partition slices are defined through MDX. This is an example of what a slice definition would look like for a 2001 partition:

<Slice>[Date].[Calendar].[Calendar Year].&amp;[2001]</Slice>

To further partition data by product categories, a slice definition would look like this:

<Slice>([Date].[Calendar].[Calendar Year].&amp;[2001], [Product].[Product Categories].[Category].&amp;[1])</Slice>

If you don’t specify a slice for a given dimension, SQL Server Analysis Services assumes that any member of that dimension can be found in the partition.

For example, say you specify a month and product category in partition slice but do not specify the store. Queries that examine sales data by store, but do not include any filters for date or product, may have to search through every partition.

You can also customize the storage mode for each partition. MOLAP storage mode is optimal for data retrieval — but it copies your relational data. If you prefer to leave your data in the relational format without making its copy, then you can use Relational OLAP (ROLAP) mode for infrequently accessed partitions. For example, most recent partitions can utilize MOLAP storage while historical partitions can use ROLAP.

SQL Server Analysis Services has an upper limit on the number of partitions — 2^31-1 = 2,147,483,647 — but cubes that have this many partitions are rare. Don’t be afraid to create as many partitions as needed.

Occasionally, partitions may need to be merged. For example, if a majority of your queries focus on recent data and historical queries are infrequent, you may have a separate partition for each product line and for each day for 30 or 60 days.

Once the data is stale and seldom accessed, you could combine historical partitions into weekly or monthly partitions. To merge partitions using SSMS, right-click on a partition and choose the Merge Partitions option.

This is what the XMLA for merging 2001 and 2002 partitions looks like:

<MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Sources>
<Source>
<DatabaseID>Adventure Works DW 2008</DatabaseID>
<CubeID>Adventure Works</CubeID>
<MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
<PartitionID>Internet_Sales_2002</PartitionID>
</Source>
</Sources>
<Target>
<DatabaseID>Adventure Works DW 2008</DatabaseID>
<CubeID>Adventure Works</CubeID>
<MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
<PartitionID>Internet_Sales_2001</PartitionID>
</Target>
</MergePartitions>

Be aware that you can copy aggregation design from one partition to another, which I will discuss in more detail in a future article. For now know that if you’re happy with your current aggregation design, you can assign it to a newly created or an existing partition.

If a partition has 500,000 or more estimated rows (you can set estimated numbers of rows in BIDS) and you haven’t defined any aggregations for this partition, then BIDS 2008 warns that your application performance can be improved by adding aggregations.

Partitions reduce the time it takes to process your measure group because each partition loads only a portion of the fact table or view. Remember that during processing, SQL Server Analysis Services modifies the SQL query defining the partition before sending it to the relational data source.

For example, earlier I showed the definition for Internet Sales 2001 partition within the Internet Sales measure group of an Adventure Works 2008 database.

The query that Analysis Services sends to SQL Server while processing this partition is considerably different from the original query:

SELECT

[dbo_FactInternetSales].[dbo_FactInternetSalesSalesAmount0_0] AS [dbo_FactInternetSalesSalesAmount0_0],
[dbo_FactInternetSales].[dbo_FactInternetSalesOrderQuantity0_1] AS [dbo_FactInternetSalesOrderQuantity0_1],
[dbo_FactInternetSales].[dbo_FactInternetSalesExtendedAmount0_2] AS [dbo_FactInternetSalesExtendedAmount0_2],
[dbo_FactInternetSales].[dbo_FactInternetSalesTaxAmt0_3] AS [dbo_FactInternetSalesTaxAmt0_3],
[dbo_FactInternetSales].[dbo_FactInternetSalesFreight0_4] AS [dbo_FactInternetSalesFreight0_4],
[dbo_FactInternetSales].[dbo_FactInternetSalesUnitPrice0_5] AS [dbo_FactInternetSalesUnitPrice0_5],
[dbo_FactInternetSales].[dbo_FactInternetSalesTotalProductCost0_6] AS [dbo_FactInternetSalesTotalProductCost0_6],
[dbo_FactInternetSales].[dbo_FactInternetSalesProductStandardCost0_7] AS [dbo_FactInternetSalesProductStandardCost0_7],
[dbo_FactInternetSales].[dbo_FactInternetSales0_8] AS [dbo_FactInternetSales0_8],
[dbo_FactInternetSales].[dbo_FactInternetSalesPromotionKey0_9] AS [dbo_FactInternetSalesPromotionKey0_9],
[dbo_FactInternetSales].[dbo_FactInternetSalesSalesTerritoryKey0_10] AS [dbo_FactInternetSalesSalesTerritoryKey0_10],
[dbo_FactInternetSales].[dbo_FactInternetSalesProductKey0_11] AS [dbo_FactInternetSalesProductKey0_11],
[dbo_FactInternetSales].[dbo_FactInternetSalesCustomerKey0_12] AS [dbo_FactInternetSalesCustomerKey0_12],
[dbo_FactInternetSales].[dbo_FactInternetSalesCurrencyKey0_13] AS [dbo_FactInternetSalesCurrencyKey0_13],
[dbo_FactInternetSales].[dbo_FactInternetSalesOrderDateKey0_14] AS [dbo_FactInternetSalesOrderDateKey0_14],
[dbo_FactInternetSales].[dbo_FactInternetSalesShipDateKey0_15] AS [dbo_FactInternetSalesShipDateKey0_15],
[dbo_FactInternetSales].[dbo_FactInternetSalesDueDateKey0_16] AS [dbo_FactInternetSalesDueDateKey0_16]
FROM
(
SELECT
[SalesAmount] AS [dbo_FactInternetSalesSalesAmount0_0],
[OrderQuantity] AS [dbo_FactInternetSalesOrderQuantity0_1],
[ExtendedAmount] AS [dbo_FactInternetSalesExtendedAmount0_2],
[TaxAmt] AS [dbo_FactInternetSalesTaxAmt0_3],
[Freight] AS [dbo_FactInternetSalesFreight0_4],
[UnitPrice] AS [dbo_FactInternetSalesUnitPrice0_5],
[TotalProductCost] AS [dbo_FactInternetSalesTotalProductCost0_6],
[ProductStandardCost] AS [dbo_FactInternetSalesProductStandardCost0_7],
1     AS [dbo_FactInternetSales0_8],
[PromotionKey] AS [dbo_FactInternetSalesPromotionKey0_9],
[SalesTerritoryKey] AS [dbo_FactInternetSalesSalesTerritoryKey0_10],
[ProductKey] AS [dbo_FactInternetSalesProductKey0_11],
[CustomerKey] AS [dbo_FactInternetSalesCustomerKey0_12],
[CurrencyKey] AS [dbo_FactInternetSalesCurrencyKey0_13],
[OrderDateKey] AS [dbo_FactInternetSalesOrderDateKey0_14],
[ShipDateKey] AS [dbo_FactInternetSalesShipDateKey0_15],
[DueDateKey] AS [dbo_FactInternetSalesDueDateKey0_16]
FROM
(
SELECT
[dbo].[FactInternetSales].[ProductKey],
[dbo].[FactInternetSales].[OrderDateKey],
[dbo].[FactInternetSales].[DueDateKey],
[dbo].[FactInternetSales].[ShipDateKey],
[dbo].[FactInternetSales].[CustomerKey],
[dbo].[FactInternetSales].[PromotionKey],
[dbo].[FactInternetSales].[CurrencyKey],
[dbo].[FactInternetSales].[SalesTerritoryKey],
[dbo].[FactInternetSales].[SalesOrderNumber],
[dbo].[FactInternetSales].[SalesOrderLineNumber],
[dbo].[FactInternetSales].[RevisionNumber],
[dbo].[FactInternetSales].[OrderQuantity],
[dbo].[FactInternetSales].[UnitPrice],
[dbo].[FactInternetSales].[ExtendedAmount],
[dbo].[FactInternetSales].[UnitPriceDiscountPct],
[dbo].[FactInternetSales].[DiscountAmount],
[dbo].[FactInternetSales].[ProductStandardCost],
[dbo].[FactInternetSales].[TotalProductCost],
[dbo].[FactInternetSales].[SalesAmount],
[dbo].[FactInternetSales].[TaxAmt],
[dbo].[FactInternetSales].[Freight],
[dbo].[FactInternetSales].[CarrierTrackingNumber],
[dbo].[FactInternetSales].[CustomerPONumber]
FROM [dbo].[FactInternetSales]
WHERE OrderDateKey <= '20011231'
) AS [FactInternetSales]
)
AS [dbo_FactInternetSales]

Why should you care what query is sent to SQL Server (or another RDBMS) during partition processing? Because query hints or SET options that are valid in a regular SQL statement might not be supported in a partition’s query definition.

For example, BIDS will let you add the SET NOCOUNT ON statement at the beginning of the partition query. If you add this option, however, SQL Server Analysis Services reports a syntax error and processing fails.
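
As an illustration (a hypothetical partition query definition, not something to deploy), the following is a perfectly valid T-SQL batch on its own, yet fails when used as a partition’s query definition because of the leading SET statement:

SET NOCOUNT ON
SELECT ProductKey, OrderDateKey, SalesAmount
FROM dbo.FactInternetSales
WHERE OrderDateKey <= '20011231'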

You can customize the partition’s processing mode, which defines whether aggregations are calculated during partition processing or by a lazy aggregation thread after processing is complete.

Lastly, you can use the storage location property to store partition data in the default or an alternate location. This property may come in handy if the disk where partition data is normally stored reaches its capacity.


SQL Azure Services – A full overview

Posted by Alin D on May 11, 2011

SQL Azure is a relational database service on Microsoft’s Windows Azure platform, well suited for web-facing database applications as well as for use as a general-purpose relational database in the cloud.

The present version mostly deals with the component analogous to the database engine of a local, on-site SQL Server. Future enhancements will host the other services, such as Integration Services, Reporting Services, Service Broker, and any other yet-to-be-defined services. Although these services are not currently hosted in the cloud, they can leverage data on SQL Azure to provide support. SQL Server Integration Services can be used to great advantage with SQL Azure for data movement, and highly interactive, boardroom-quality reports can be generated using SQL Azure as a backend server.

Infrastructure features

SQL Azure is designed for peak workloads through failover clustering, load balancing, replication, and scaling out, all of which are automatically managed at the data center. SQL Azure’s infrastructure architecture is fashioned to implement all of these features.

High availability is made possible by replicating multiple redundant copies to multiple physical servers, thus ensuring that business processes can continue without interruption. At least three replicas are created; a replica can replace an active copy facing any kind of fault condition so that service is assured. At present, the replicated copies are all in the same data center, but in the future geo-replication of data may become available so that performance for global enterprises may be improved. Hardware failures are addressed by automatic failover.

Enterprise data centers have traditionally addressed scale-out data storage needs, but at the cost of the administrative overhead of maintaining on-site SQL Servers. SQL Azure offers the same or even better functionality without incurring those administrative costs.

How different is SQL Azure from SQL Server?

SQL Azure (version 10.25) may be viewed as a subset of an on-site SQL Server 2008 (version 10.5), both exposing Tabular Data Stream (TDS) for data access using T-SQL. As a subset, SQL Azure supports only some of the features of SQL Server and the T-SQL feature set. However, more T-SQL features are being added with each service update (SU1 through SU5). Since it is hosted on computers in the Microsoft data centers, its administration is, in some respects, different from that of an on-site SQL Server.

SQL Azure is administered as a service, unlike on-site servers. The SQL Azure server is not a SQL Server instance and is therefore administered as a logical server rather than as a physical server. The database objects such as tables, views, users, and so on are administered by SQL Azure database administrator but the physical side of it is administered by Microsoft on its data centers. This abstraction of infrastructure away from the user confers most of its availability, elasticity, price, and extensibility features. To get started with SQL Azure, you must provision a SQL Azure Server on Windows Azure platform as explained in the After accessing the portal subsection, later in the article.

SQL Azure provisioning

Provisioning a SQL Azure Server at the portal is done by a mere click of the mouse and will be ready in a few minutes. You may provision the storage that you need, and when the need changes, you can add or remove storage. This is an extremely attractive feature especially for those whose needs start with low storage requirements and grow with time. It is also attractive to those who may experience increased load at certain times only.

SQL Azure databases lie within the operational boundary of the customer-defined SQL Azure Server; it is a container of logical groupings of databases enclosed in a security firewall fence. While the databases are accessible to the user, the files that store the relational data are not; they are managed by the SQL Azure services.

A single SQL Azure Server that you get when you subscribe can house a large number (150) of databases, presently limited to the 1 GB and 10 GB types within the scope of the licensing arrangement.

• What if you provision for 1 GB and you exceed this limit?

Then either you provision a server with a 10 GB database or get one more 1 GB database. This means that there is a bit of due diligence you need to do before you start your project.

• What if the data exceeds 10 GB?

The recommendation is to partition the data into smaller databases. You may have to redesign your queries to address the changed schema, as cross-database queries are not supported. The rationale for using smaller databases and partitioning lies in the agility to quickly recover from failures (high availability/fault tolerance) and to replicate faster, while addressing the needs of the majority of users (small business and web facing). However, responding to user requests, Microsoft may provide 50 GB databases in the future (the June 2010 update to SQL Azure Services will allow 50 GB databases).

• How many numbers of SQL Azure Servers can you have?

You can have any number of SQL Azure Servers (that you can afford) and place them in any geolocation you choose. It is strictly one server for one subscription. Presently there are six geolocated data centers that can be chosen. The number of data centers is likely to grow. Best practices dictate that you keep your data nearest to where you use it most, so that performance is optimized. The SQL Azure databases, being relational in nature, can be programmed using T-SQL skills that are used in working with on-site SQL Servers. It must be remembered though, that the SQL Azure Servers are not physical servers but are virtual objects. Hiding their physical whereabouts but providing adequate hooks to them, helps you to focus more on the design and less on being concerned with files, folders, and hardware problems. While the server-related information is shielded from the user, the databases themselves are containers of objects similar to what one finds in on-site SQL Servers such as tables, views, stored procedures, and so on. These database objects are accessible to logged on users who have permission.

After accessing the portal

To get started with SQL Azure Services, you will need to get a Windows Azure platform account, which gives access to the three services presently offered. The first step is to get a Windows Live ID and then establish an account at Microsoft’s Customer Portal. In this article, you will be provisioning a SQL Azure Server after accessing the SQL Azure Portal.

Server-level administration

Once you are in the portal, you will be able to create your server for which you can provide a username and password. You will also be able to drop the server and change the password. You can also designate in which of the data centers you want your server to be located. With the credentials created in the portal, you will become the server-level principal; the equivalent of sa of your server. In the portal, you can also create databases and firewall fences that will only allow users from the location(s) you specify here. The user databases that you create here are in addition to the master database that is created by SQL Azure Services; a repository of information about other databases. The master database also keeps track of logins and their permissions. You can get this information by querying the master database for the sys.sql_logins and sys.databases views.
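
For example, while connected to the master database you might run the following queries (a simple sketch; both views are available in SQL Azure’s master database):

SELECT name FROM sys.sql_logins;   -- logins defined on the logical server
SELECT name FROM sys.databases;    -- databases hosted on the logical server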

If you are planning to create applications, you may also copy the connection strings that you would need for your applications, which are available in the portal. You would be typically using the Visual Studio IDE to create applications. However, SQL Azure can be used standalone without having to use the Windows Azure service. Indeed some users may just move their data to SQL Azure for archive.

Once you have provisioned a server, you are ready to create other objects that are needed besides creating the databases. At the portal, you can create a database and set up a firewall fence, but you will need another tool to create other objects in the database.

Setting up firewall rules

Users accessing SQL Azure Server in the Cloud need to go through two kinds of barriers. Firstly, you need to go through your computer’s firewall and then go in through the firewall that protects your SQL Azure Server. The firewall rules that you set up in the portal allow only users from the location you set up for the rule, because the firewall rules only look at the originating IP address.

By default, there are no firewall rules to start with and no one gets admitted. Firewall rules are first configured in the portal. If your computer is behind a Network Address Translation (NAT) then your IP address will be different from what you see in your configuration settings. However, the user interface in the portal for creating a firewall discovers and displays the correct IP address most of the time.

A workaround is suggested here for those cases in which your firewall UI incorrectly displays your IP Address: http://hodentek.blogspot.com/2010/01/firewall-ip-address-setting-in-sql.html.

Firewalls can also be managed from a tool such as SSMS using extended stored procedures in SQL Azure. They can be managed programmatically as well from Visual Studio.

In order for you to connect to SQL Azure, you also need to open your computer’s firewall, so that an outgoing TCP connection is allowed through port 1433 by creating an exception. You can configure this in your computer’s Control Panel. If you have set up some security program, such as Norton Security, you need to open this port for outgoing TCP connections in the Norton Security Suite’s UI.

In addition, your on-site programs accessing SQL Azure Server and your hosted applications on Windows Azure may also need access to SQL Azure. For this scenario, you should check the checkbox Allow Microsoft Services access to this server in the firewall settings page.

The firewall rule only checks for an originating IP address but you need to be authenticated to access SQL Azure. Your administrator, in this case the server-level principal, will have to set you up as a user and provide you with appropriate credentials.

Administering at the database level

SQL Azure database administration is best done from SSMS. You connect to the Database Engine in SSMS, which displays a user interface where you enter the credentials that you established in the portal. You also have other options to connect to SQL Azure (Chapter 3, Working with SQL Azure Databases from Visual Studio 2010 and Chapter 4, SQL Azure Tools). In SSMS, you have the option to connect to either of the databases, the system-created master or the database(s) that you create in the portal. The Object Explorer displays the server with all objects that are contained in the chosen database. What is displayed in the Object Explorer is contextual and the use of the USE statement to change the database context does not work. Make sure you understand this, whether you are working with Object Explorer or query windows. The server-level administrator is the ‘top’ administrator and he or she can create other users and assign them to different roles just like in the on-site SQL Server. The one thing that an administrator cannot do is undertake any activity that would require access to the hardware or the file system.
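
A small illustration of the point about USE (the database name is hypothetical): the statement below is accepted by an on-site SQL Server but raises an error on SQL Azure instead of switching context, so you reconnect to the target database instead.

-- Fails on SQL Azure; switching databases requires a new connection
USE SalesReporting;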

Role of SQL Azure database administrator

The SQL Azure database administrator administers and manages schema generation, statistics management, index tuning, query optimization, and security (users, logins, roles, and so on). Since the physical file system cannot be accessed by the user, tasks such as backing up and restoring databases are not possible. Judging by the questions and concerns raised by users in forums, this appears to be one of the less appealing features of SQL Azure and has often resulted in remarks that ‘it is not enterprise ready’. Users want to keep a copy of their data, and if it is a very large database, the advantages of not having servers on site disappear, because you do need an on-site server to back up the data. One recommendation from Microsoft is to use SQL Server Integration Services and to bulk copy data using the bcp utility.

SQL Azure databases

These databases are no different from those of on-site SQL Server 2008 except that the user database node may not have all the nodes of a typical user database that you find in the on-site server. The nodes Database Diagrams, Service Broker, and Storage will be absent as these are not supported. In the case of the system database node, only the master will be present. The master in SQL Azure is a database that contains all information about the other databases.

You can only access SQL Azure with SQL Server Authentication, whereas an on-site SQL Server gives you the additional option of Windows Authentication. All of the allowed DDL and DML operations can be programmed using templates available in SSMS. Some of the more common ones, as well as access to the Template Explorer, which provides a more complete list, are detailed later in the article.
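
As a minimal sketch of the kind of DDL and DML you can run from an SSMS query window connected to a SQL Azure database (the table and data are illustrative), note that SQL Azure expects a clustered index on a table before you can insert rows, which is why the primary key below is declared CLUSTERED:

CREATE TABLE dbo.Customer
(
CustomerID int NOT NULL PRIMARY KEY CLUSTERED,
CustomerName nvarchar(100) NOT NULL
);
INSERT INTO dbo.Customer (CustomerID, CustomerName) VALUES (1, N'Contoso');
SELECT CustomerID, CustomerName FROM dbo.Customer;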

User administration and logins

Security is a very important aspect of database administration and it is all the more important in the case of the multi-tenant model used in hosting SQL Azure to control access.

The server-level administrator created in the portal is the top-level administrator of the SQL Azure Server. While he can create other databases in the portal, he will have to create other database objects, including users and their logins, using SSMS.

Server-level administration

The master database is used to perform server-level administration, as it keeps records of all logins and of the logins that have permission to create a database. You must first establish a connection to the master database when creating a New Query to carry out CREATE, ALTER, or DROP operations on logins or databases. The server-related views sys.sql_logins and sys.databases can be used to review logins and databases. Whenever you want to change the database context, you have to log in to that database using the Options button in SSMS’s Connect to Server dialog.

Creating a database using T-SQL is extremely simple, because there are no file references to be specified and certain other features are not implemented. The following is the syntax for creating a database on an on-site SQL Server instance:

CREATE DATABASE database_name
[ ON
    [ PRIMARY ] [ <filespec> [ ,...n ]
    [ , <filegroup> [ ,...n ] ]
    [ LOG ON { <filespec> [ ,...n ] } ]
]
[ COLLATE collation_name ]
[ WITH <external_access_option> ]
]
[;]

To attach a database:

CREATE DATABASE database_name
    ON <filespec> [ ,...n ]
    FOR { ATTACH [ WITH <service_broker_option> ]
        | ATTACH_REBUILD_LOG }
[;]

<filespec> ::=
{
(
    NAME = logical_file_name ,
    FILENAME = { 'os_file_name' | 'filestream_path' }
    [ , SIZE = size [ KB | MB | GB | TB ] ]
    [ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
    [ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
) [ ,...n ]
}

<filegroup> ::=
{
FILEGROUP filegroup_name [ CONTAINS FILESTREAM ] [ DEFAULT ]
    <filespec> [ ,...n ]
}

<external_access_option> ::=
{
    [ DB_CHAINING { ON | OFF } ]
    [ , TRUSTWORTHY { ON | OFF } ]
}

<service_broker_option> ::=
{
    ENABLE_BROKER
    | NEW_BROKER
    | ERROR_BROKER_CONVERSATIONS
}

To create a database snapshot:

CREATE DATABASE database_snapshot_name
    ON
    (
        NAME = logical_file_name,
        FILENAME = 'os_file_name'
    ) [ ,...n ]
    AS SNAPSHOT OF source_database_name
[;]

Look how simple the following syntax is for creating a database in SQL Azure:

CREATE DATABASE database_name
[ ( MAXSIZE = {1 | 10} GB ) ]
[;]
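
For example, the following creates a 10 GB database (the database name is just an illustration):

CREATE DATABASE SalesReporting (MAXSIZE = 10 GB);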

However, certain default values are set for the database; these can be reviewed by issuing the following query after creating the database:

SELECT * FROM sys.databases;

Managing logins

After logging in as a server-level administrator to master, you can manage logins using CREATE LOGIN, ALTER LOGIN, and DROP LOGIN statements. You can create a password by executing the following statement for example, while connected to master:

CREATE LOGIN xfiles WITH PASSWORD = '@#$jAyRa1'
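
The ALTER LOGIN and DROP LOGIN statements mentioned above are used the same way while connected to master; for example (independent statements, with a placeholder password):

ALTER LOGIN xfiles WITH PASSWORD = 'N3w$tr0ngPwd!';
DROP LOGIN xfiles;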

You need to create a password before you proceed further. During authentication you will normally use the login name and password, but because some tools implement TDS differently, you may have to append the server name part of the fully qualified server name (<servername>.database.windows.net) to the user name, as in login_name@<servername>. Note that both <login_name> and <login_name>@<servername> are valid in the Connect to Server UI of SSMS.

Connecting to SQL Azure using new login

After creating a new login as described here, you must grant database-level permissions to the new login for it to connect to SQL Azure. You can do so by creating users in the database mapped to the login, as shown below.
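
A minimal sketch, assuming the xfiles login created earlier: connect to the target user database and run the following (the user name and role are examples):

CREATE USER xfiles_user FROM LOGIN xfiles;
EXEC sp_addrolemember 'db_datareader', 'xfiles_user';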

Logins with server-level permissions

The roles loginmanager and dbmanager are two security-related roles in SQL Azure to which users may be assigned that allow them to create logins or create databases. Only the server-level principal (created in the portal) or users in the loginmanager role can create logins. The dbmanager role is similar to the dbcreator role, and users in this role can create databases using the CREATE DATABASE statement while connected to the master database.

These role assignments are made using the stored procedure sp_addrolemember, as shown here for the users User1 and User2. These users are created while connected to master, for example:

CREATE USER User1 FROM LOGIN login1;

CREATE USER User2 FROM LOGIN login2;

EXEC sp_addrolemember 'dbmanager', 'User1';

EXEC sp_addrolemember 'loginmanager', 'User2';

Migrating databases to SQL Azure

As most web applications are data-centric, SQL Azure databases need to be populated with data before the applications can access it. More often than not, if you are trying to push all of your data to SQL Azure, you need tools. You have several options, such as using scripts, the migration wizard, bulk copy (bcp.exe), SQL Server Integration Services, and so on. More recently (in the April 19, 2010 update), data-tier applications were implemented for SQL Azure, providing yet another option for migrating databases using both SSMS and Visual Studio.

Troubleshooting

There may be any number of reasons why interacting with SQL Azure may not always be successful. For example, the 99.99 percent availability assured by the service level agreement may not always be met, a command may exceed its configured time-out, and so on. In these cases, troubleshooting to find out what might have happened becomes important. Here we will look at some of the issues that prevent interacting with SQL Azure and the ways and means of troubleshooting their causes.

• Login failure is one of the common problems that one faces when connecting to SQL Azure. In order to log in successfully:

°°You need to make sure that you are using the correct version of SSMS (one that supports SQL Azure, such as SSMS 2008 R2).

°°Make sure you are using SQL Server Authentication in the Connect to Server dialog box.

°°You must make sure your login name and password (typed exactly as given to you by your administrator) are correct. The password is case sensitive. Sometimes you may need to append the server name to the login name.

°°If you cannot browse the databases, you can type in the name and try.

If your login is not successful, either there is a problem in the login or the database is not available.

If you are a server-level administrator, you can reset the password in the portal. For other users, the administrator or a user in the loginmanager role can correct the logins.

• Service unavailable or does not exist.

If you have already provisioned a server, check the following link: http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx, to make sure SQL Azure Services are running without problem at the data center.

Use the same techniques that you would use in the case of SQL Server 2008 with network commands like Ping, Tracert, and so on. Use the fully qualified name of the SQL Azure Server you have provisioned while using these utilities.

• You assume you are connected, but maybe you are disconnected.

You may be in a disconnected state for a number of reasons, such as:

°°When a connection is idle for an extended period of time

°°When a connection consumes an excessive amount of resources or holds onto a transaction for an extended period of time

°°If the server is too busy

Try reconnecting again. Note that SQL Azure error messages are a subset of SQL error messages.

T-SQL support in SQL Azure

Transact-SQL is used to administer SQL Azure. You can create and manage objects as you will see later in this chapter. CRUD (create, read, update, delete) operations on the table are supported. Applications can insert, retrieve, modify, and delete data by interacting with SQL Azure using T-SQL statements.

As a subset of SQL Server 2008, SQL Azure supports only a subset of T-SQL that you find in SQL Server 2008.

The supported and partially supported features from Microsoft documentation are reproduced here for easy reference.

The support for Transact-SQL reference in SQL Azure can be described in three main categories:

• Transact-SQL language elements that are supported as is

• Transact-SQL language elements that are not supported

• Transact-SQL language elements that provide a subset of the arguments and options in their corresponding Transact-SQL elements in SQL Server 2008

The following Transact-SQL features are supported or partially supported by SQL Azure:

• Constants

• Constraints

• Cursors

• Index management and rebuilding indexes

• Local temporary tables

• Reserved keywords

• Stored procedures

• Statistics management

• Transactions

• Triggers

• Tables, joins, and table variables

• Transact-SQL language elements

• Create/drop databases

• Create/alter/drop tables

• Create/alter/drop users and logins

• User-defined functions

• Views

The following Transact-SQL features are not supported by SQL Azure:

• Common Language Runtime (CLR)

• Database file placement

• Database mirroring

• Distributed queries

• Distributed transactions

• Filegroup management

• Global temporary tables

• Spatial data and indexes

• SQL Server configuration options

• SQL Server Service Broker

• System tables

• Trace flags

T-SQL grammar details are found here: http://msdn.microsoft.com/en-us/library/ee336281.aspx.

 


Configure TMG as Cache Proxy

Posted by Alin D on February 20, 2011

One of the primary reasons for deploying a Web proxy server is to reduce bandwidth utilization to the Internet. Microsoft Forefront Threat Management Gateway (TMG) 2010 provides very granular control over cache file management as well as cache content.

This article will help you understand how to make Forefront TMG 2010 caching more useful, analyze the cache behavior of Web services, configure the Forefront TMG 2010 cache, and monitor its relative effectiveness.

How Caching Works

Forefront TMG 2010 can be configured to maintain a cache of Web objects and fulfill Web requests from its cache. If a request cannot be fulfilled from the cache, Forefront TMG 2010 initiates a new request to the Web server on behalf of the client. After the Web server responds to the request, Forefront TMG 2010 caches the response and forwards it to the end user.

By default, caching is not enabled on Forefront TMG 2010 because no disk space is allocated for caching. When caching is enabled, an administrator can define cache rules that determine how content from specified sites is stored in and retrieved from Forefront TMG 2010’s cache. Cache rules apply to destinations and do not factor in the source of the request. A destination can be specified as a network entity, a domain name set, or a URL set. For any network, if TMG Client support is enabled or if Forefront TMG 2010 is configured as the default gateway for internal computers (SecureNET clients), Forefront TMG 2010 may cache requests for them as well.

If a request is allowed by an access or publishing rule, Forefront TMG 2010 analyzes its cache configuration and cached objects to determine whether the request should be served from the cache or retrieved from the Web server. If the object is not present in the cache, Forefront TMG 2010 checks the Web Chaining rules to determine whether the request needs to be forwarded directly to the requested Web server, to another upstream proxy server, or to an alternate destination.

If the request is present in the cache, Forefront TMG 2010 performs the following steps:
1. Forefront TMG 2010 checks whether the object is valid. If the object is valid, Forefront TMG 2010 retrieves the object from the cache and returns it to the user. Forefront TMG 2010 determines whether the object is valid by performing the following checks:
– The Time to Live (TTL) specified in the source has not expired.
– The TTL configured in the content download job has not expired.
– The TTL configured for the object has not expired.
2. If the object is invalid, Forefront TMG 2010 checks the Web Chaining rules.
3. If a Web Chaining rule matches the request, Forefront TMG 2010 performs the action specified by the Web Chaining rule; for example, routing the request directly to a specified Web server, to an upstream proxy, or to an alternate specified server.
4. If the Web Chaining rule is configured to route the request to a Web server, Forefront TMG 2010 determines whether the Web server is accessible.
5. If the Web server is not accessible, Forefront TMG 2010 determines whether the cache was configured to return expired objects. If the cache was configured to allow Forefront TMG 2010 to return an expired object as long as a specific maximum expiration time hasn’t passed, the object is returned from the cache to the end user.

6. If the Web server is available, Forefront TMG 2010 determines whether the object may be cached depending on whether the cache rule is set to cache the response. If it is, Forefront TMG 2010 caches the object and returns the object to the end user.

Cache Storage

Forefront TMG 2010 can store objects on the local hard disk, and for faster access can store most of the frequently requested objects on both the disk and in RAM. Cached pages can be stored immediately in memory (RAM) to be accessed by end users requesting the Web content. A lazy-writer or buffered-writer approach is used to write pages to the disk. By default, 10 percent of physical memory is allocated for RAM caching. This approach results in faster availability of content. All additional data is stored on the disk. As objects are cached, Forefront TMG 2010 adds them to the cache content file on the disk. If the cache content file is too full to hold new objects, Forefront TMG 2010 removes older objects from the cache based on the age of the object, the size of the object, and how frequently the object is accessed.
The recommendations for and properties of the cache file are as follows:

– A formatted NTFS partition must be used and the cache drive must be local. When the cache drive is configured, a cache-content file, Dir1.cdat, is created in the location <drive>:\urlcache according to the selections you made while enabling caching.
– The maximum size for a cache file on a single partition is 64 GB.
– To avoid write conflicts with other Windows activity, you should place the cache file on a physical disk separate from the operating system, the Forefront TMG 2010 installation, and the page files.
– The size and the location of the cache can be configured on a per-TMG-computer basis.
– Having a large cache is recommended because objects are dropped from the cache when the maximum size is exceeded. However, if the cache is too large (greater than 60 GB), file read-write delays may impact service shutdown and startup.

An administrator can take the following actions for the content to be stored in the cache:

– Specify the types of objects that can be cached.
– Specify how long objects that do not have a timestamp should remain in the cache.
– Specify whether objects that do not return an OK response (HTTP 200 status code) should be cached.
– Specify that URLs larger than the maximum size limit of the RAM cache should not be stored in RAM. Because objects cached in memory are retrieved faster than those on disk, it is important to decide what needs to be cached in RAM versus on disk, to prevent excessively large objects from filling the RAM cache. The default maximum size limit is 12,800 bytes.

Forefront TMG 2010 provides caching in two different scenarios. Forefront TMG 2010 caches objects when an internal user makes a Web request to the Internet. This type of caching is known as forward caching. In a Web publishing scenario, Forefront TMG 2010 provides cached content from the internal Web server published by Forefront TMG 2010 to the external users. This type of caching is known as reverse caching.

Forward Caching

Forward caching happens when a user located on a network protected by Forefront TMG 2010 requests Web content through Forefront TMG 2010. Forefront TMG 2010 makes a decision on how to serve that content depending on its availability in the cache.
If the content is available in cache, the process is as follows:

1. The user requests a Web page on the Internet.
2. Forefront TMG 2010 intercepts this request and determines whether the content is available in cache (RAM cache or disk-based cache).
3. If the content is available, it is returned to the user in accordance with the cache settings. The settings determine whether only valid objects can be returned to the client.
4. The content is then moved to the in-memory cache (RAM cache) as per the cache settings. After a period of time, if the content is no longer being requested regularly (is no longer “popular”), Forefront TMG 2010 copies this content from RAM to disk-based cache and flushes it from memory. If another user requests the content stored in disk-based cache, Forefront TMG 2010 returns it to the RAM cache.
If the content is not available in cache, the process is as follows:
1. The user requests a Web page through Forefront TMG 2010.
2. Forefront TMG 2010 intercepts this request and determines whether the content is available in cache (RAM cache or disk-based cache).
3. If the request is not present in cache or has expired, Forefront TMG 2010 forwards the request to the Web server.
4. The Web server returns the requested information and, based on the cache settings, Forefront TMG 2010 caches the object in its RAM cache where frequently requested content can be stored for fast retrieval.
5. The content is returned to the end user.
6. After a period of time, or if the content is no longer frequently requested, Forefront TMG 2010 copies the content from RAM to disk cache and flushes it from the memory, leaving the only available copy on disk. It is only returned to RAM cache if another user requests the content.

Reverse Caching

Reverse caching happens when Internet users request content from a Web server published by Forefront TMG 2010. The process is as follows:
1. The Internet user requests a Web server published by Forefront TMG 2010.
2. Forefront TMG 2010 intercepts the request and determines whether the content is present in cache. If the content is not available in cache or has expired, Forefront TMG 2010 forwards the request to the published Web server.
3. After the published Web server responds, Forefront TMG 2010, as per its cache settings, stores the content in RAM cache where frequently requested content is stored for fast retrieval.
4. Forefront TMG 2010 returns the content to the Internet user who requested it.
5. After a period of time, if the content is no longer frequently requested, Forefront TMG 2010 copies this content from RAM to disk-based cache and flushes it from the memory.

Cache Rules

Caching behavior in Forefront TMG 2010 can be made granular and more configurable by creating specific cache rules. Each cache rule allows for specific types of content to be processed in different ways, depending on your needs.
By default, when caching is enabled, a default cache rule is created that caches objects based on its default settings. You can create additional caching rules based on your specific caching needs and requirements. Each rule created can contain the following customizations:
– Specify how content retrieved by the rule is returned to the end user. Objects that are valid or expired can be returned and requests can be routed to the Web server if no cached object is found, or they can be dropped entirely.
– Specify the type of content to be cached. By default, objects are stored in the cache only when source or request headers in the HTTP request indicate caching. You can define content that should not be cached or content that should be cached even if the source or request headers do not indicate caching.
– Specify a maximum size for the objects that the rule caches.

– Specify whether SSL responses should be cached. This does not apply to forward caching for sites exempted from HTTPS inspection.
– Enable caching of HTTP objects.
– Specify how long objects should remain in cache. Unless the source specifies an expiration time, HTTP objects remain in cache according to the TTL setting in the rule. The TTL is based on a percentage of time that has passed since the object was created or modified.
– Enable caching of FTP objects and specify a TTL to indicate how long FTP objects should remain in the cache.
You can see some of these settings on the Advanced tab when creating a cache rule (see figure).

[Figure: Advanced tab of the cache rule settings]

 

Caching Compressed Content

HTTP compression reduces file size by using algorithms to eliminate redundant data. Most common Web-related file types can safely be compressed. HTTP compression uses the industry-standard GZIP and Deflate algorithms. These algorithms compress static files and optionally perform on-demand compression of dynamically generated responses before sending them over the network. The same algorithms are used again to decompress the static files and dynamic responses on an HTTP 1.1-capable client. A client that is configured to use HTTP 1.1 may request compressed content from a Web server. Web servers indicate in their responses whether they support compression.

HTTP compression in Forefront TMG 2010 is a global HTTP policy setting. It applies to all HTTP traffic that passes through Forefront TMG 2010 to or from a specified network or network object, rather than to traffic handled by a specific rule. HTTP compression is provided by two Web filters:

– Compression Filter  This filter is responsible for compression and decompression of HTTP requests and responses. It has a high priority and is high in the ordered list of Web filters because it is responsible for decompression: if you choose to enable inspection of compressed content, decompression must take place before any other Web filters inspect the content.

– Caching Compressed Content Filter  This filter is responsible for caching compressed content and serving requests from the compressed content in the cache. It has the lowest priority and is low in the ordered list of Web filters, because caching can take place after all other filters have handled the content.

Caching and compression together provide a more efficient way to serve compressed requests. Content is cached on Forefront TMG 2010 in one of the following ways:

– Compressed  Content is requested in compressed format and cached in compressed format.
– Uncompressed  Content is requested in uncompressed format and cached in uncompressed format.
– Uncompressed and incompressible  If an end user requests compressed content and it arrives at the cache uncompressed, it is stored in the cache as incompressible. The next time a request for the same compressed content is received, Forefront TMG 2010 recognizes that the content is incompressible and serves it from the cache uncompressed rather than from the Internet. Content that is inspected is also stored as uncompressed.

Compression settings applied to cached content are persistent. Therefore, if you want compression configuration changes to be reflected in the cached content, the cache on Forefront TMG 2010 needs to be cleared (this is discussed later in the article). Another important thing to remember is that when inspecting incoming compressed content, Forefront TMG 2010 Web filters must have access to the decompressed content. When compression is enabled, content is cached in compressed form. The Caching Compressed Content Web filter is responsible for decompressing the content so that the other Web filters can inspect the decompressed version of the cached content. When Forefront TMG 2010 receives a request for the cached content, the Compression Web filter recompresses the content before sending it to the end user. This in turn increases the amount of time it takes to transfer the content from Forefront TMG 2010 to the user.


System Enhances Security for Microsoft Security Essentials and the TMG Firewall

Posted by Alin D on February 2, 2011

You have probably heard the old chestnut that says “the only way to secure your network is to pull all the cables out of their Ethernet jacks”. While such an answer is given a bit tongue in cheek, and it wouldn’t work in today’s wireless world anyway, the idea is that if one computer is able to communicate with another computer (or computing device, as tablets and smart phones make up an increasing percentage of network-connected devices compared to PCs), there is always a chance that one of the machines is going to compromise the other. Of course, you don’t necessarily need a wired Ethernet network or even Wi-Fi to accomplish the compromise, as exploits can also be carried from one device to another using removable media (CDs, DVDs, floppies, USB keys, and so on). As for wireless, given the ever-increasing use of Wi-Fi and cell phone carrier-based 3G and 4G networks, the future of network attacks is likely to be “over the air”.

Regardless of the medium over which the attacks take place, however, it’s clear that we all need protection against them. In the earlier days of networking, miscreants and malcontents used simple denial-of-service attacks to create havoc and interrupt business activities. As the Internet bad guys got more sophisticated (and maybe a bit more egotistical), they began to spend their “valuable” time defacing web sites. Then, when the Internet became more mainstream and more actual business started taking place over the global network, criminal types (individuals and organizations) were able to take advantage of the opportunities to make money off their crimes, and co-opt the skills of hackers to quietly compromise devices so that they could steal data (including businesses’ trade secrets and individuals’ identities) without detection. As networking itself has grown more complex, so have the threats against which we need to protect ourselves.

There are a couple of ways you can protect yourself against today’s more sophisticated attackers. One effective method is to put technologies on both the network and the host operating systems that can inspect the nature of the traffic destined for the host that needs the protection. Network-based approaches are typically located on network firewalls that lie at the edge of the network, such as the Forefront Threat Management Gateway (TMG) firewall, and that works well for many purposes.

The problem with network-based protections (when used alone) is that they are typically focused on a particular “choke point” on the network, commonly at the network edge or right behind it. For example, the TMG firewall is often placed behind a traditional so-called hardware firewall. This configuration allows the dedicated firewall device to do some basic, simplistic processing of old-style network layer attacks, which reduces the processing overhead required of the TMG firewall, a device capable of much more sophisticated protection than the “hardware” firewall.

Since you can't cost-effectively put TMG firewalls at every juncture on your network, it's important that you also put network layer protection on the host operating systems themselves. This gives you protection against both Internet based attacks (typically initiated by the user who tries to access malicious content, since Internet hosts rarely can initiate connections to private network hosts without the hosts being previously compromised) and attacks sourced from other hosts on the same corporate network, where those other hosts have been compromised and then automatically seek to spread an infection or other type of system exploit.

This is where the Network Inspection System (NIS) comes in. NIS is Microsoft's response to the growing threat of network based attacks. NIS was first introduced with the Forefront Threat Management Gateway (TMG) firewall to enable sophisticated network based IDS/IPS at the edge of the corporate network. Recently, Microsoft extended the significant protection enabled by having NIS on the TMG firewall by including NIS with the most recently released version of Microsoft Security Essentials (version 2.0).

With an increasing number of application layer attacks hitting the scene and new ones being released on a regular basis, Microsoft Research designed the Generic Application-level Protocol Analyzer (GAPA). GAPA includes a protocol specification language and an inspection engine that operates on network streams and captures. GAPA makes it possible to create network protocol parsers faster and reduces the development time required to create the parsers, and these parsers are used extensively by the NIS.

One of the key problems network security professionals have to deal with is that attackers usually create and launch exploits for disclosed vulnerabilities more quickly than application vendors can deploy security updates. When you add the time it takes most administrators to test these security fixes before deploying them to the time it takes to develop the updates in the first place, it becomes clear that networks often go unprotected for a significant amount of time after an exploit becomes known.

This delay leaves computers vulnerable to attacks and exploitation, during a period when the bad guys know all about the exploit and are scrambling to take advantage of it before it's patched. The Network Inspection System reduces these windows of vulnerability between disclosure and patch deployment from weeks to a few hours. That is a significant improvement and can make the difference between a network that goes on working and one brought to its knees by a zero day attack.

The vulnerability research and the signature development are done by Microsoft's Malware Protection Center (MMPC). For security bulletins that fix publicly-unknown vulnerabilities, NIS helps provide immediate protection shortly after the details of the vulnerability become publicly known. The MMPC also rapidly responds to zero day incidents by releasing NIS signatures for them as soon as they are known. At this time, NIS signatures help detect exploits of vulnerabilities in Microsoft products only. While this might be interpreted as a limitation when implemented with the TMG firewall (since the TMG firewall is intended to protect the entire network), it isn't a problem at all when the NIS is included with Microsoft Security Essentials, since MSE can only be installed on Windows computers.

The Network Inspection System uses three types of signatures when performing its IDS/IPS functions:

  • Vulnerability-based. These signatures will detect most variants of exploits against a given vulnerability.
  • Exploit-based. These signatures will detect a specific exploit of a given vulnerability.
  • Policy-based. These are signatures that are generally used for auditing purposes and are developed when neither a vulnerability-based nor an exploit-based signature can be written.

We don't know at this time which of these signature types is used with Microsoft Security Essentials, as there is no public information available on this currently. We do know that the TMG firewall uses all three types of signatures and they are enabled by default. Note that while the policy-based signatures might not seem to be useful in that they are not providing IPS protection, they do provide IDS functionality so that you're aware of the possible compromise of a system on your network and you can then initiate your incident response plan as needed.

The Network Inspection System can analyze a number of application layer protocols for potential exploit code. While there are literally thousands of application layer protocols in use today, only a small handful represent a significant percentage of all network traffic. Because there are time constraints for any endeavor of this kind, Microsoft has focused on the following popular protocols:

  • HTTP
  • DNS
  • SMB
  • SMB2
  • NetBIOS
  • MSRPC
  • SMTP
  • POP3
  • IMAP
  • MIME

After reviewing that list of protocols, I think you can agree that these are the most commonly used, and most commonly abused, protocols on the Internet and on intranets today.

Microsoft evaluates the need for supporting additional protocols on a continuous basis and will add that support as necessary if there is an exploit using some other protocol that needs to be protected against. If additional protocols are enabled, the support for that protocol will be included when the signature update takes place. In most cases, support for a new protocol is added because there is a significant vulnerability or exploit that uses that protocol, and therefore in most cases, the MMPC will also release a signature that uses that new protocol. If you are using a TMG firewall, you can see which protocols the signatures use by grouping the NIS signatures by protocol. The TMG firewall obviously gives you more information and more control, as a network administrator, whereas MSE is designed for the use of consumers and very small businesses and thus is intended to work more transparently.

The Network Inspection System is a network level IDS/IPS system that uses the GAPA language to enable fast development of NIS signatures. NIS inspects network traffic for a collection of the most commonly used protocols on both the Internet and the intranet, and assesses that traffic for potentially malicious code. NIS is currently available with the TMG firewall, where it inspects traffic to and from the Internet, and with Microsoft Security Essentials, where it inspects traffic moving into or out of Windows hosts. NIS depends on the Windows Filtering Platform, which means it's available for Windows Server 2008 and above and Windows Vista and above. NIS focuses on Windows based vulnerabilities, which makes it the ideal IDS/IPS for Windows hosts. All these features enable both the TMG firewall and Microsoft Security Essentials to provide an exceptional level of security for networks that contain Windows servers and client systems.

Posted in Security | Tagged: , , , , , , | Leave a Comment »

Best Practices to Speed Up Your Site in ASP.NET

Posted by Alin D on January 12, 2011

A sluggish website not only reduces your site's ability to attract and retain visitors, it also looks unprofessional and will not be attractive to advertisers. If your site is hosted on the cloud, it is also an indication that the site is not properly optimized and is consuming more system resources than it should, which increases your hosting bill.

These 10 best practices will ensure that your site is running at full speed.

1. Caching

If someone put a gun to my head and gave me one way to improve a site's performance, it would be caching. Caching buffers your pages in the server's memory and so avoids database/server roundtrips, resulting in faster response times and reduced server load (and thus reduced hosting charges). If your site gets fewer than 1000 pageviews per day you probably won't see much speed improvement from caching. Your CMS should support caching (if not, you should move on ASAP) and you should turn it on. For WordPress, WP Super Cache is the standard and still the best caching plugin.
Still not convinced? Take a look at the response times for a site where I was turning caching on and off; all the spikes in the graph are from periods when caching was turned off.

2. Use a Content Delivery Network (CDN)

CDN’s were previously very expensive to use with high monthly minimum’s but several new entrants (notably AWS CloudFront and RackSpace Cloud Files) have made it affordable for smaller sites to use CDNs. CDN’s work by caching files at different geographical (‘edge’) locations and therefore reducing the lag users who are geographically far away from the server experience. The larger the files, the greater the benefit of using a CDN, video files are a must but you should also consider hosting images and even css/js files on a CDN.

3. Place Scripts at The Bottom of the Page

Html pages load sequentially, so an external javascript file referenced above the body tag will need to be loaded before the page's content. This can make the page load appear sluggish.
Even scripts such as javascript files for AJAX operations used on the page can be placed below the content. This means users cannot interact with the site content the instant it is displayed, but research by Facebook showed that users prefer to see content as soon as possible even if it cannot yet be interacted with.
Scripts such as Google Analytics code should always be placed at the bottom of pages.

4. Use External CSS Files

Pages which are heavy with Html load a lot slower than pages which reference CSS for styles, positioning etc. For starters, CSS is more compact than Html for positioning and styling page elements; furthermore, if it is placed in an external file it will be cached by the user's browser so that it does not need to be loaded again for subsequent page loads in the same session.
You should always review the rendered source code (i.e. view the rendered page and then look at the source by right-clicking and selecting View Source or a similar command) to look for Html which can be replaced by CSS.

5. Host Files on a Separate Domain

A quirk of most browsers is that they can only make two simultaneous requests to a domain. If your page has several images and external files (such as CSS or javascript files) these must be queued and requested two at a time. Hosting images or files on a separate domain (or a subdomain) allows the browser to make more simultaneous requests and render the page quicker. This is also an additional reason to use a CDN, as the files will always be on a separate domain.

6. Minimize Hits to the Database

Why is caching so effective? It reduces requests to both the server for processing and to the database for data. Database operations are very expensive in terms of resources, so you should review your code to minimize hits to the database. Consider the following:

  • Ensure that you minimize the number of connections opened to a database in a visit. Once a database connection is opened, try to perform as many database operations as possible and then close the connection. Do not open and close connections several times unless this is necessary.
  • Open database connections as late as possible and close them as early as possible – i.e. don't do additional processing that isn't necessary whilst the connection is open; grab the data, close the connection and then do the additional processing.
  • Review your SQL code to ensure it is efficient – several SQL operations can often be performed in one statement, and it is not always necessary to execute separate insert and select statements. Ensure you don't use Select * in your queries – always select just the columns you need (a short sketch of these patterns follows this list).
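
To make the "open late, close early" and "select only what you need" advice concrete, here is a minimal PowerShell sketch using System.Data.SqlClient; the connection string, table and column names are hypothetical placeholders rather than anything from a real project.

# A minimal illustration of "open late, close early" and selecting only the columns you need.
# The connection string, table and column names below are hypothetical placeholders.
$connectionString = "Server=.;Database=MySiteDb;Integrated Security=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = $connection.CreateCommand()
$command.CommandText = "SELECT Title, PublishedDate FROM dbo.Articles WHERE IsPublished = 1"   # no SELECT *
$articles = @()
try {
    $connection.Open()                 # open as late as possible
    $reader = $command.ExecuteReader()
    while ($reader.Read()) {
        $articles += New-Object PSObject -Property @{
            Title         = $reader["Title"]
            PublishedDate = $reader["PublishedDate"]
        }
    }
    $reader.Close()
}
finally {
    $connection.Close()                # close as early as possible
}
# Any further processing happens after the connection is closed
$articles | Sort-Object PublishedDate -Descending | Select-Object -First 10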

7. Optimize Images

Maybe it is just my perception, but I always remember this being first on the list of a site optimization checklist and now it hardly even features in a top twenty. I guess there are more interesting things to talk about, but optimizing images is still a major factor in reducing page loads. Use GIF wherever possible; screenshots and logos are normally a must for GIF. A GIF holds a maximum of 256 colors, but you can normally reduce this without compromising quality and so reduce the filesize. JPGs should be examined even more closely as they are normally larger files; you can set quality anywhere between 1 and 100, and normally around 65 is an acceptable compromise between quality and size. Similarly you should optimize PNG files for size vs quality. Photoshop's Save for Web & Devices tool is invaluable for this purpose.
Also, always use the height and width attributes of the <img> tag as this speeds up rendering, but do not scale an image using these attributes – they should match the actual image size.

8. GZip Everything

Gzip is a popular compression format and about 95% of all web traffic comes from browsers that support Gzip decompression. Therefore it is safe to say that all your Http traffic delivered to the user's browser should be Gzip compressed. The browser includes Accept-Encoding: gzip, deflate in its request headers to tell the server (Apache, IIS, etc.) that it can handle compressed content; the server then compresses the response and adds a Content-Encoding: gzip header so the browser knows to decompress it. Most sites will Gzip html pages, but css and js files should also be Gzip compressed.
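
If you want a quick way to confirm that compression is actually switched on, the following PowerShell sketch requests a page with an Accept-Encoding header and prints the Content-Encoding the server replies with. The URL is a placeholder, and the approach relies on standard .NET HttpWebRequest behaviour rather than anything specific to one web server.

# Check whether a server returns gzip-compressed content; replace the URL with your own site
$request = [System.Net.HttpWebRequest]::Create("http://www.example.com/")
$request.Headers.Add("Accept-Encoding", "gzip, deflate")
$response = $request.GetResponse()
# A gzip-enabled server should answer with Content-Encoding: gzip (or deflate)
"Content-Encoding: " + $response.Headers["Content-Encoding"]
"Content-Length:   " + $response.ContentLength
$response.Close()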

9. Reduce the Number of Http Requests

Http requests are expensive as they require round trips to the web server. Consider the following to minimize the number of requests:

  • Combine files – don't have several .css or .js files unless this is strictly necessary; just combine them into a single larger file.
  • Combine images – if you have several images for your site's design, consider combining them into a single image file and then selecting a segment using the CSS background-image and background-position properties. The CSS Sprites article on A List Apart is a good tutorial.

10. Examine Your Final Page

This isn't really a separate best practice in itself, but it is probably the most effective way to find page bloat. Too often the optimization is done by reviewing all the server-side files, but when a page is generated there are often new items added to the header which weren't noticed or were added by a script that was installed. Starting the optimization from the final generated page is the best technique. This will also identify html bloat, such as empty tags (empty <span> and <div> tags are infamous for populating pages).

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »

Exchange 2010 SSL Certificates

Posted by Alin D on January 10, 2011

If it is your first time working with Exchange Server 2010 then you will quickly realise that you need to learn about the relationship between Exchange 2010 and SSL certificates.

SSL Requirements in Exchange Server 2010

Prior to Exchange Server 2007 an Exchange server could be deployed and by default would not require SSL for any of its communications.  The wise move when deploying Exchange Server 2003 (for example) was to install an SSL certificate for IIS and use SSL for external access (eg Outlook Web Access and ActiveSync).  However this was not mandatory and it certainly isn’t unusual to encounter legacy Exchange environments that allow external access over insecure HTTP connections.

For Exchange Server 2007, and then again with Exchange Server 2010, Microsoft changed the default behaviour so that SSL was required for many services, even when they are only used internally.  So a newly installed Exchange Server 2010 server that hosted the Client Access server role would have SSL enforced for services such as:

  • Outlook Web App
  • ActiveSync
  • Exchange Web Services
  • Outlook Anywhere

The administrator could disable that SSL requirement, but again the wise move is to protect Exchange Server 2010 communications with SSL encryption rather than allow them over insecure HTTP connections.

Because the SSL requirement is on by default the Exchange 2007 and Exchange 2010 servers are installed with a self-signed SSL certificate.  This self-signed certificate does the job of securing any SSL connections, however because it is self-signed no connecting clients or devices will trust it, so it is unsuitable for long term use.  The administrator needs to install a new SSL certificate for Exchange Server 2010.

Exchange 2010 SAN Certificates

Administrators who have installed SSL certificates for Exchange before may be familiar with the general process involved.  But they might not be familiar with the SSL certificate requirements for Exchange Server 2010.

In short, Exchange Server 2010 will respond to connections on multiple names.  These names typically include:

  • The fully qualified domain name (FQDN) of the Exchange server, eg ex2.exchangeserverpro.net
  • DNS aliases for external access, eg mail.exchangeserverpro.net or webmail.exchangeserverpro.net
  • The Autodiscover name of each SMTP namespace in the organization, eg autodiscover.exchangeserverpro.net

This makes a standard single-name SSL certificate unsuitable.  Instead, Exchange Server 2010 must be installed with a SAN certificate.

SAN stands for Subject Alternative Names and is a type of SSL certificate that has an attribute that stores additional names for the SSL certificate to apply to.  For example, here is the certificate used to secure Outlook Web App for Microsoft.

Exchange 2010 SSL Certificate

Exchange 2010 SSL certificate used by Microsoft

In Exchange Server 2007 it was possible to make a series of configuration changes so that a single-name SSL certificate would work.  However these changes were complex, especially in larger environments, and the cost to perform and maintain them (in terms of administrative time spent) far outweighed the cost of a genuine SAN certificate from a commercial Certificate Authority.

Where to Buy SSL Certificates for Exchange 2010

There are lots of commercial Certificate Authorities to choose from when buying an SSL certificate for your Exchange Server 2010 servers.

Each of these providers is different in terms of pricing, licensing and support, so I do recommend that you take a close look and compare them in detail before making a decision.

However my recommendation is to use Digicert’s Unified Communications Certificate, which I like for the pricing, generous licensing terms, and support such as unlimited reissues of the certificate (if for example you forget one of the alternative names the first time you request the certificate).

How to Install an SSL Certificate for Exchange Server 2010

The process to acquire and install an Exchange 2010 SSL certificate is as follows (an example set of Exchange Management Shell commands appears after the list).

  1. Generate a new certificate request using the wizard built in to Exchange Server 2010
  2. Submit the certificate request to your chosen Certificate Authority
  3. Install the issued SSL certificate on the Exchange 2010 server
  4. Assign the new SSL certificate to the appropriate services on the Exchange 2010 server
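
For administrators who prefer the shell, the sketch below covers steps 1, 3 and 4 in the Exchange Management Shell, reusing the example names from earlier in this article. The file paths are placeholders, and you should adjust the -DomainName list and the -Services value to your own environment.

# Step 1 - generate a SAN certificate request (the names reuse the examples above)
$data = New-ExchangeCertificate -GenerateRequest -SubjectName "CN=mail.exchangeserverpro.net" -DomainName mail.exchangeserverpro.net,autodiscover.exchangeserverpro.net,ex2.exchangeserverpro.net -PrivateKeyExportable $true
Set-Content -Path C:\certrequest.req -Value $data

# Steps 3 and 4 - import the certificate issued by your Certificate Authority and assign it to services
Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path C:\issuedcert.cer -Encoding Byte -ReadCount 0)) | Enable-ExchangeCertificate -Services "IIS,SMTP"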

The complete process is demonstrated in this article:

Posted in Exchange | Tagged: , , , , , , | 1 Comment »

Azure CDN (Content Delivery Network) In Your App

Posted by Alin D on December 8, 2010

The Windows Azure CDN is a convenient way for application developers to minimize the latency that geographically dispersed application users will experience. As of February 2010, the Azure CDN is in CTP and incurs no charges.

Who Needs a CDN?

Application users that are geographically far away from the data center where a file they are attempting to access resides will experience a lag due to the distance the data has to travel to reach their client. CDNs attempt to solve this issue by caching copies of the file in various geographic locations across the globe; the user attempting to access the file is served from the nearest data center to minimize the latency. Latency is an especially acute problem for large files that are streamed to users (notably video files), but with the declining cost of CDNs they are more and more commonly used to deliver smaller static files – the theme files for AzureSupport.com (such as the logo and css files) are all served via the Amazon CloudFront CDN.

Introducing the Windows Azure CDN

The Azure CDN features 20 edge locations (geographic points where the cached data is served from) for blobs which are in public containers.

Using The Azure CDN

The Windows Azure CDN is tightly integrated into Azure Storage. To enable it, simply navigate to the appropriate storage account in the Azure developer portal and click Enable CDN; this will turn on the CDN for that storage account, although there may be a delay of up to 60 minutes before it is available for use.
Once the CDN is functioning, the only change will be the URL from which your blobs are accessible; the URL will be in the following format – http://<guid>.vo.msecnd.net/<container name>/.

TTL – Time to Live

One issue you will have to contend with is the caching of the blobs. If you need to update a blob (i.e. replace the file with an updated version), the new version will not be accessible to users until the cache has expired and the Azure CDN refreshes the cache from the actual blob in the storage container. The expiry time for the cache is known as the TTL (Time To Live). The default for Azure is 72 hours, however this can be manually adjusted by setting the CloudBlob.Attributes.Properties.CacheControl property of the CloudBlob object in your code. It should be noted that setting the TTL to a short time will reduce the effectiveness of the CDN, as the CDN only reduces the request time when the blob is cached at an edge location.
It should also be noted that blobs which are deleted will still be available at any edge location where they are cached until the TTL expires.
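
If you want to see what adjusting the Cache-Control value looks like in practice, here is a rough PowerShell sketch against the v1.x StorageClient library. The assembly path, account name, key, container and blob names are all placeholders, and the exact type and constructor names should be verified against the SDK version you have installed.

# Rough sketch only - type names are from the v1.x StorageClient library and should be verified.
# Account name, key, container and blob names are placeholders.
Add-Type -Path "C:\Program Files\Windows Azure SDK\v1.3\ref\Microsoft.WindowsAzure.StorageClient.dll"   # adjust to your SDK path
$credentials = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey("myaccount", "<storage account key>")
$blobClient  = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient("http://myaccount.blob.core.windows.net", $credentials)
$blob = $blobClient.GetBlobReference("assets/site.css")
$blob.FetchAttributes()                                   # load the blob's current properties
$blob.Properties.CacheControl = "public, max-age=3600"    # one hour instead of the 72-hour default
$blob.SetProperties()                                     # persist the new Cache-Control value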

Registering a Custom Domain for Azure CDN

If the default URL does not appeal, you can register a custom domain as the CDN URL for that storage account (note that only one custom domain can be registered per CDN endpoint). To register a custom domain, navigate to your storage account; if it has the Azure CDN enabled, it will have a URL in the format http://<guid>.vo.msecnd.net/ listed under Custom Domains. Click 'Manage' to the right of this URL. On the Custom Domain for CDN Endpoint page, enter the custom domain and click Generate Key. The next page will show a CNAME record which you will have to create for the domain and then click Validate to prove you are the domain's admin or owner.
Once the domain's ownership has been validated, you can create a CNAME record for your chosen domain and point it to your default Azure CDN URL; this will then allow you to use your custom domain for all URLs.

Posted in Azure | Tagged: , , , | Leave a Comment »

Backup and Restore in SharePoint 2010

Posted by Alin D on November 3, 2010

SharePoint 2010 backups are a major advance over the backup facilities in SharePoint 2007. SharePoint can now easily maintain full backups and even granular backups, where only selected data can be restored. In this article I will demonstrate how to use PowerShell to back up and restore SharePoint applications, as well as the standard SharePoint web interface.
Content Database Recovery
In SharePoint 2007, my preference for SharePoint backup was to perform a database backup. It was easy to maintain and to back up/restore across many SharePoint farms, and it needed little administrative work when I had to move sites to different farms or restore broken farms to new hardware. For an experienced SharePoint administrator, it didn't take long to restore the content from the content databases alone. But the real disadvantage of SharePoint backup via database backup was not in hard recovery, where you had to deal with entire server failures – the pain was recovering a single file from a document library that was accidentally deleted. The only way to recover that file using the database backups was to fully restore it on a different farm, attach it to a temporary SharePoint site and browse to the file to restore it from the library. Now this is no longer an issue, since we have Unattached Content Database Recovery.
Unattached Content Database Recovery allows you to browse through the content database using SharePoint, navigate to a list or document library and save the list or library into a .cmp file which can easily be moved between sites.

SharePoint 2010 backup restore

Unattached Content Database Recovery

To be able to use this feature, you simply need to attach the content database backup to a SQL Server instance under a different name and then use the screen (shown above) to retrieve any content from the attached database backup. You no longer need to attach that database to any site inside a farm. This feature is part of the Granular Backup functionality in SharePoint. In this way you can keep backups of entire content databases using the SQL Server management tools, or decide to back up only a site collection.
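
If you prefer the shell to the Central Administration screen, roughly the same unattached recovery can be reached from PowerShell. The database and server names below are placeholders, and the usage is a sketch to verify against your own environment.

# Connect to a restored copy of a content database without attaching it to the farm.
# The database and server names are placeholders.
$db = Get-SPContentDatabase -ConnectAsUnattachedDatabase -DatabaseName "WSS_Content_Restored" -DatabaseServer "SQLSERVER01"
# List the site collections held in the unattached database before recovering content from them
$db.Sites | ForEach-Object { $_.Url }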
Backup and Restore feature
The stsadm powered backup and restore features from SharePoint 2007 are still here, but they have changed and now they can be helpful rather than painful for the administrator.

SharePoint 2010 backup restore

Backup in Central Administration

Central Administration now checks whether the required services are running and gives you the choice to back up the entire farm, or only selected applications or services. Backup and restore from Central Administration performs the same operations as the command line and stsadm tool. When you choose the content to back up, you can decide to perform a full or differential backup and provide the path to the storage location for the backup.

Backup Using StsAdm

To perform the backup using StsAdm (which may be helpful when configuring automatic backups) you should first type in the backup command itself, to see what it is capable of.
To be able to use StsAdm, open the CMD window and browse to the C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\bin directory and type in the stsadm command for backup as shown below:

SharePoint 2010 backup restore

The backup command for SharePoint 2010 is very similar to 2007, but it is now much more efficient. For example, you can now easily back up and restore search indexes, which was impossible in SharePoint 2007 and was a big issue for companies that had millions of documents and needed instant search access in their production environments.

Sample command to perform a farm backup with stsadm:
stsadm -o backup -directory \\myServer\backup -backupmethod full

Using Powershell to Perform SharePoint Backups

To be able to use SharePoint commands with PowerShell, we first need to load the SharePoint snap-ins into the PowerShell console.

SharePoint 2010 backup restore

Now, in a similar way to using StsAdm, we can test what PowerShell is capable of by typing the get-help Backup-SPFarm command. The Backup-SPFarm cmdlet allows for creating a backup of an individual database, web application or the entire farm. It is able to back up individual components or entire farm settings at once, but if we want to back up site contents, we should check the Backup-SPSite cmdlet instead.

Examples of Backup-SPFarm:

Make a full backup of the farm's configuration settings to the Backup directory:
Backup-SPFarm -Directory \\file_server\share\Backup -BackupMethod full -ConfigurationOnly

Back up the farm using 10 backup threads and place the farm backup into the C:\Backup folder (the Force parameter will force the backup even if there is insufficient space for the backup files):
Backup-SPFarm -Directory C:\Backup -BackupMethod full -BackupThreads 10 -Force

Back up a specific site collection to the C:\Backup folder:
Backup-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak
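
For completeness, the corresponding restore cmdlets follow the same pattern. The examples below are a sketch reusing the hypothetical paths and URLs from the backup examples above; check get-help Restore-SPFarm and get-help Restore-SPSite for the full parameter sets.

Restore the farm configuration settings from the backup directory, overwriting the existing settings:
Restore-SPFarm -Directory \\file_server\share\Backup -RestoreMethod Overwrite -ConfigurationOnly

Restore the site collection from the file created by Backup-SPSite, overwriting the existing site collection:
Restore-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak -Force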

Summary

When I was using SharePoint Server 2007, the out-of-the-box backup and restore mechanisms were inadequate for backing up a large corporate SharePoint installation, so I stayed with trusted database backups. However, there is always a need to minimize the effort required to accomplish a task, and SharePoint 2010's backup and restore are now enterprise ready.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , | Leave a Comment »

Setup FTP 7.5 on Windows Server 2008 and publish through Forefront TMG 2010

Posted by Alin D on November 2, 2010

Introduction

Microsoft has created a new FTP service that has been completely rewritten for Windows Server® 2008. This new FTP service incorporates many new features that enable web authors to publish content better than before, and offers web administrators more security and deployment options.

  • Integration with IIS 7: IIS 7 has a brand-new administration interface and configuration store, and the new FTP service is tightly integrated with this new design. The old IIS 6.0 metabase is gone, and a new configuration store that is based on the .NET XML-based *.config format has taken its place. In addition, IIS 7 has a new administration tool, and the new FTP server plugs seamlessly into that paradigm.
  • Support for new Internet standards: One of the most significant features in the new FTP server is support for FTP over SSL. The new FTP server also supports other Internet improvements such as UTF8 and IPv6.
  • Shared hosting improvements: By fully integrating into IIS 7, the new FTP server makes it possible to host FTP and Web content from the same site by simply adding an FTP binding to an existing Web site. In addition, the FTP server now has virtual host name support, making it possible to host multiple FTP sites on the same IP address. The new FTP server also has improved user isolation, now making it possible to isolate users through per-user virtual directories.
  • Custom authentication providers: The new FTP server supports authentication using non-Windows accounts for IIS Managers and .NET Membership.
  • Improved logging support: FTP logging has been enhanced to include all FTP-related traffic, unique tracking for FTP sessions, FTP sub-statuses, additional detail fields in FTP logs, and much more.
  • New supportability features: IIS 7 has a new option to display detailed error messages for local users, and the FTP server supports this by providing detailed error responses when logging on locally to an FTP server. The FTP server also logs detailed information using Event Tracing for Windows (ETW), which provides additional detailed information for troubleshooting.
  • Extensible feature set: FTP supports extensibility that allows you to extend the built-in functionality that ships with the FTP service. More specifically, there is support for creating your own authentication and authorization providers. You can also create providers for custom FTP logging and for determining the home directory information for your FTP users.

Additional information about new features in FTP 7.5 is available in the “What’s New for Microsoft and FTP 7.5?” topic on Microsoft’s http://www.iis.net/ web site.

This document will walk you through installing the new FTP service and troubleshooting installation issues.

Installing FTP for IIS 7.5

IIS 7.5 for Windows Server 2008 R2

  1. On the taskbar, click Start, point to Administrative Tools, and then click Server Manager.
  2. In the Server Manager hierarchy pane, expand Roles, and then click Web Server (IIS).
  3. In the Web Server (IIS) pane, scroll to the Role Services section, and then click Add Role Services.
  4. On the Select Role Services page of the Add Role Services Wizard, expand FTP Server.
  5. Select FTP Service. (Note: To support ASP.NET Membership or IIS Manager authentication for the FTP service, you will also need to select FTP Extensibility.)
  6. Click Next.
  7. On the Confirm Installation Selections page, click Install.
  8. On the Results page, click Close.
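
As a side note, if you script your server builds, roughly the same role services can be added from an elevated PowerShell prompt via the ServerManager module. The feature names below are my assumption, so confirm them first with Get-WindowsFeature:

Import-Module ServerManager
Get-WindowsFeature Web-Ftp*        # list the FTP-related role services and their exact names
Add-WindowsFeature Web-Ftp-Service # FTP Service
Add-WindowsFeature Web-Ftp-Ext     # FTP Extensibility (for ASP.NET Membership or IIS Manager authentication)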

Installing FTP for IIS 7.0

Prerequisites

The following items are required to complete the procedures in this section:

  1. You must be using Windows Server 2008.
  2. Internet Information Services 7.0 must be installed.
  3. If you are going to manage the new FTP server by using the IIS 7.0 user interface, the administration tool will need to be installed.
  4. You must install the new FTP server as an administrator. (See the Downloading and Installing section for more.)
  5. IIS 7.0 supports a shared configuration environment, which must be disabled on each server in a web farm before installing the new FTP server for each node. Note: Shared configuration can be re-enabled after the FTP server has been installed.
  6. The FTP server that is shipped on the Windows Server 2008 DVD must be uninstalled before installing the new FTP server.
Downloading the right version for your server

There are two separate downloadable packages for the new FTP server; you will need to download the appropriate package for your version of Windows Server 2008:

Launching the installation package

You will need to run the installation package as an administrator. This can be accomplished by one of the following methods:

  1. Logging in to your server using the actual account named “Administrator”, then browsing to the download pages listed above or double-clicking the download package if you have saved it to your server.
  2. Logging on using an account with administrator privileges and opening a command-prompt by right-clicking the Command Prompt menu item that is located in the Accessories menu for Windows programs and selecting “Run as administrator”, then typing the appropriate command listed below for your version of Windows to run the installation:
    • 32-bit Windows Versions:
      • msiexec /i FTP 7_x86_75.msi
    • 64-bit Windows Versions:
      • msiexec /i FTP 7_x64_75.msi

Note: One of the above steps is required because the User Account Control (UAC) security component in the Windows Vista and Windows Server 2008 operating systems prevents access to your applicationHost.config file. For more information about UAC, please see the following documentation:

The following steps walk you through all of the required settings to add FTP publishing for the Default Web Site.

Walking through the installation process
  1. When the installation package opens, you should see the following screen. Click Next to continue.
    alt
  2. On the next screen, click the I accept check box if you agree to the license terms, and then click Next.
    alt
  3. The following screen lists the installation options. Choose which options you want installed from the list, and then click Next.
    • Common files: this option includes the schema file. When installing in a shared server environment, each server in the web farm will need to have this option installed.
    • FTP Publishing Service: this option includes the core components of the FTP service. This option is required for the FTP service to be installed on the server.
    • Managed Code Support: this is an optional component, but features that use managed extensibility require this option before using them, such as ASP.NET and IIS manager authentication. Note: This feature cannot be installed on Windows Server 2008 Core.
    • Administration Features: this option installs the FTP 7 management user interface. This requires the IIS 7.0 manager and .NET framework 2.0 to be installed. Note: This feature cannot be installed on Windows Server 2008 Core.
      alt
  4. On the following screen, click Install to begin installing the options that you chose on the previous screen.
    alt
  5. When installation has completed, click Read notes to view the FTP README file, or click Finish to close the installation dialog.
    alt

Note: If an error occurs during installation, you will see an error dialog. Refer to the Troubleshooting Installation Issues section of this document for more information.

Troubleshooting Installation Issues

When the installation of FTP 7 fails for some reason, you should see a dialog with a button called “Installation log”. Clicking the “Installation log” button will open the MSI installation log that was created during the installation. You can also manually enable installation logging by running the appropriate command listed below for your version of Windows. This will create a log file that will contain information about the installation process:

  • 32-bit Windows Versions:
    • msiexec /L FTP 7.log /I FTP 7_x86_75.msi
  • 64-bit Windows Versions:
    • msiexec /L FTP 7.log /I FTP 7_x64_75.msi

You can analyze this log file after a failed installation to help determine the cause of the failure.

Clicking the “Online information” button on the error dialog will launch the “Installing and Troubleshooting FTP 7.5” document in your web browser.

Note: If you attempt to install the downloaded package on an unsupported platform, the following dialog will be displayed:

Known Issues in This Release

The following issues are known to exist in this release:

  1. While Web-based features can be delegated to remote managers and added to web.config files using the new IIS 7 configuration infrastructure, FTP features cannot be delegated or stored in web.config files.
  2. The icon of a combined Web/FTP site may be marked with a question mark even though the site is currently started with no error. This occurs when a site has a mixture of HTTP/FTP bindings.
  3. After adding FTP publishing to a Web site, clicking the site’s node in the tree view of the IIS 7 management tool may not display the FTP icons. To work around this issue, use one of the following:
    • Hit F5 to refresh the IIS 7 management tool.
    • Click on the Sites node, then double-click on the site name.
    • Close and re-open the IIS 7 management tool.
  4. When you add a custom provider in the site defaults, it shows up under each site. However, if you attempt to remove or modify the settings for a custom provider at the site level, IIS creates an empty <providers /> section for the site, but the resulting configuration for each site does not change. For example, if the custom provider is enabled in the site defaults, you cannot disable it at the site level. To work around this problem, open your applicationHost.config file as an administrator, add a <clear/> element to the list of custom authentication providers, then manually add the custom provider to your settings. For example, in order to add the IIS Manager custom authentication provider, you would add settings like the following example:
    <ftpServer>
    <security>
    <authentication>
    <customAuthentication>
    <providers>
    <clear />
    <add name="IisManagerAuth" enabled="true" />
    </providers>
    </customAuthentication>
    </authentication>
    </security>
    </ftpServer>
  5. The following issues are specific to the IIS 7.0 release:
    • The FTP service that is shipped on the Windows Server 2008 DVD should not be installed after the new FTP service has been installed. The old FTP service does not detect that the new FTP service has been installed, and running both FTP services at the same time may cause port conflicts.
    • IIS 7 can be uninstalled after the new FTP service has been installed, and this will cause the new FTP service to fail. If IIS is reinstalled, new copies of the IIS configuration files will be created and the new FTP service will continue to fail because the configuration information for the new FTP service is no longer in the IIS configuration files. To fix this problem, re-run the setup for the new FTP service and choose “Repair”.

To Add FTP Site from the IIS management Console

Creating a New FTP Site Using IIS 7 Manager

The new FTP service makes it easy to create new FTP sites by providing you with a wizard that walks you through all of the required steps to create a new FTP site from scratch.

Step 1: Use the FTP Site Wizard to Create an FTP Site

In this first step you will create a new FTP site that anonymous users can open.

Note: The settings listed in this walkthrough specify "%SystemDrive%\inetpub\ftproot" as the path to your FTP site. You are not required to use this path; however, if you change the location for your site you will have to change the site-related paths that are used throughout this walkthrough.

  1. Open IIS 7 Manager. In the Connections pane, click the Sites node in the tree.
  2. As shown in the image below, right-click the Sites node in the tree and click Add FTP Site, or click Add FTP Site in the Actions pane.
    • Create a folder at “%SystemDrive%\inetpub\ftproot”
    • Set the permissions to allow anonymous access:
      1. Open a command prompt.
      2. Type the following command:
        ICACLS "%SystemDrive%\inetpub\ftproot" /Grant IUSR:R /T
      3. Close the command prompt.

    alt

  3. When the Add FTP Site wizard appears:
    • Enter “My New FTP Site” in the FTP site name box, then navigate to the %SystemDrive%\inetpub\ftproot folder that you created in the Prerequisites section. Note that if you choose to type in the path to your content folder, you can use environment variables in your paths.
    • When you have completed these items, click Next.

    alt

  4. On the next page of the wizard:
    • Choose an IP address for your FTP site from the IP Address drop-down, or choose to accept the default selection of “All Unassigned.” Because you will be using the administrator account later in this walk-through, you must ensure that you restrict access to the server and enter the local loopback IP address for your computer by typing “127.0.0.1” in the IP Address box. (Note: If you are using IPv6, you should also add the IPv6 localhost binding of “::1”.)
    • Enter the TCP/IP port for the FTP site in the Port box. For this walk-through, choose to accept the default port of 21.
    • For this walk-through, do not use a host name, so make sure that the Virtual Host box is blank.
    • Make sure that the Certificates drop-down is set to “Not Selected” and that the Allow SSL option is selected.
    • When you have completed these items, click Next.

    alt

  5. On the next page of the wizard:
    • Select Anonymous for the Authentication settings.
    • For the Authorization settings, choose “Anonymous users” from the Allow access to drop-down, and select Read for the Permissions option.
    • When you have completed these items, click Finish.

    alt

Summary

You have successfully created a new FTP site using the new FTP service. To recap the items that you completed in this step:

  1. You created a new FTP site named “My New FTP Site”, with the site’s content root at “%SystemDrive%\inetpub\ftproot”.
  2. You bound the FTP site to the local loopback address for your computer on port 21, and you chose not to use Secure Sockets Layer (SSL) for the FTP site.
  3. You created a default rule for the FTP site to allow anonymous users “Read” access to the files.
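
If you prefer scripting to the wizard, a roughly equivalent site can be created from an elevated PowerShell prompt with the WebAdministration module that ships with IIS 7.5. The cmdlet and parameter names below are to the best of my knowledge, so check Get-Command New-WebFtpSite on your server; the security settings from Step 2 still need to be applied afterwards.

Import-Module WebAdministration
New-WebFtpSite -Name "My New FTP Site" -IPAddress "127.0.0.1" -Port 21 -PhysicalPath "$env:SystemDrive\inetpub\ftproot"
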
Step 2: Adding Additional FTP Security Settings

Creating a new FTP site that anonymous users can browse is useful for public download sites, but web authoring is equally important. In this step, you add additional authentication and authorization settings for the administrator account. To do so, follow these steps:

  1. In IIS 7 Manager, click the node for the FTP site that you created earlier, then double-click FTP Authentication to open the FTP authentication feature page.
    alt
  2. When the FTP Authentication page displays, highlight Basic Authentication and then click Enable in the Actions pane.
    alt
  3. In IIS 7 Manager, click the node for the FTP site to re-display the icons for all of the FTP features.
  4. You must add an authorization rule so that the administrator can log in. To do so, double-click the FTP Authorization Rules icon to open the FTP authorization rules feature page.
    alt
  5. When the FTP Authorization Rules page is displayed, click Add Allow Rule in the Actions pane.
    alt
  6. When the Add Allow Authorization Rule dialog box displays:
    • Select Specified users, then type “administrator” in the box.
    • For Permissions, select both Read and Write.
    • When you have completed these items, click OK.
      alt
Summary

To recap the items that you completed in this step:

  1. You added Basic authentication to the FTP site.
  2. You added an authorization rule that allows the administrator account both “Read” and “Write” permissions for the FTP site.
Step 3: Logging in to Your FTP Site

In Step 1, you created an FTP site that anonymous users can access, and in Step 2 you added additional security settings that allow an administrator to log in. In this step, you log in both anonymously and using your administrator account.

Note: In this step log in to your FTP site using the local administrator account. When creating the FTP site in Step 1 you bound the FTP site to the local loopback IP address. If you did not use the local loopback address, use SSL to protect your account settings. If you prefer to use a separate user account instead of the administrator account, set the correct permissions for that user account for the appropriate folders.

Logging in to your FTP site anonymously
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server:FTP localhost
  3. When prompted for a user name, enter “anonymous”.
  4. When prompted for a password, enter your email address.

You should now be logged in to your FTP site anonymously. Based on the authorization rule that you added in Step 1, you should only have Read access to the content folder.

Logging in to your FTP site using your administrator account
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server:FTP localhost
  3. When prompted for a user name, enter “administrator”.
  4. When prompted for a password, enter your administrator password.

You should now be logged in to your FTP site as the local administrator. Based on the authorization rule that you added in Step 2 you should have both Read and Write access to the content folder.

Summary

To recap the items that you completed in this step:

  1. You logged in to your FTP site anonymously.
  2. You logged in to your FTP site as the local administrator.

Publish FTP site from Forefront TMG 2010

Let’s begin

Note:
Keep in mind that the information in this article is based on a release candidate version of Microsoft Forefront TMG and is subject to change.

A few months ago, Microsoft released RC 1 (Release Candidate) of Microsoft Forefront TMG (Threat Management Gateway), which has a lot of new exciting features.

One of the new features of Forefront TMG is its ability to allow FTP server traffic through the Firewall in both directions. It does this in the form of Firewall access rules for outbound FTP access and with server publishing rules for inbound FTP access through a published FTP Server. This server is located in your internal network or a perimeter network, also known as a DMZ (if you are not using public IP addresses for the FTP Server in the DMZ).

First, I will show you the steps you will need to follow in order to create a Firewall rule which will allow FTP access for outgoing connections through TMG.

FTP access rule

Create a new access rule which allows the FTP protocol for your clients. If you want to allow FTP access for your clients, the clients must be Secure NAT or TMG clients, also known as the Firewall client in previous versions of Forefront TMG.

Please note:
If you are using the Web proxy client, you should note that through this type of client only read-only FTP access is possible, and you cannot use a classic FTP client for FTP access; only web browser FTP access is possible, with some limitations.

The following picture shows a FTP access rule.

alt
Figure 1: FTP access rule

A well-known pitfall, dating back to ISA Server 2004, is that by default, after the FTP access rule has been created, the rule only allows read-only FTP access. This is for security purposes, in order to prevent users from uploading confidential data outside the organization without permission. If you want to enable FTP uploads you have to right-click the FTP access rule, and then click Configure FTP.

alt
Figure 2: Configure FTP

All you have to do is remove the read only flag, wait for the new FTP connection to be established, and the users get all the necessary permissions to carry out FTP uploads.

alt
Figure 3: Allow write access through TMG

FTP Server publishing

If you want to allow incoming FTP connections to your internal FTP servers, or to FTP servers located in the DMZ, you have to create server publishing rules if the network relationship between the external and the internal/DMZ network is NAT. If you are using a route network relationship, it is possible to use Firewall rules to allow FTP access.

To gain access to an FTP server in your internal network, create an FTP server publishing rule.

Simply start the new Server Publishing Rule Wizard and follow the instructions.

As the protocol you have to select the FTP Server protocol definition which allows inbound FTP access.

alt
Figure 4: Publish the FTP-Server protocol

The standard FTP Server protocol definition uses the associated standard protocol, which can be used for inspection by NIS if a NIS signature is available.

alt
Figure 5: FTP-Server protocol properties

The Standard FTP Server protocol definition allows FTP Port 21 TCP for inbound access and the protocol definition is bound to the FTP access filter which is responsible for the FTP protocol port handling (FTP Data and FTP control port).

alt
Figure 6: FTP ports and FTP Access Filter binding

Active FTP

One of the changes in Microsoft Forefront TMG is that the Firewall no longer allows Active FTP connections by default, for security reasons. You have to manually allow the use of Active FTP connections. It is possible to enable this feature in the properties of the FTP access filter. Navigate to the System node in the TMG management console, select the Application Filters tab, select the FTP Access filter and in the task pane click Configure Selected Filter (Figure 7).

alt
Figure 7: FTP Access filter properties

In the FTP access filter properties select the FTP Properties tab and enable the checkbox Allow Active FTP Access and save the configuration to the TMG storage.

alt
Figure 8: Allow Active FTP through TMG

FTP alerts

Forefront TMG comes with a lot of predefined alert settings for several components and events. One of them is the alert for the FTP Filter Initialization Warning. This alert informs the administrator when the FTP filter fails to parse the allowed FTP commands.

alt
Figure 9: Configure FTP alert options

The alert actions are almost the same as in ISA Server 2006, so there are no new things to explain for experienced ISA Administrators.

Conclusion

In this article, I showed you some ways to allow FTP access through the TMG Server. There are some pitfalls on the way to a successful FTP implementation. One of them, present since the introduction of ISA Server 2004, is that FTP write access through the Firewall must be explicitly allowed; the other is new to Forefront TMG, which does not allow Active Mode FTP connections by default, so you have to manually activate this feature if you really need this type of special configuration.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

How to Migrate Content in SharePoint 2010

Posted by Alin D on November 1, 2010

In this tutorial I will describe how to migrate content between SharePoint site collections and even entire farms. Very often you need to clone, copy or just create a testing environment out of your production sites, so good migration procedures are essential for every SharePoint administrator. The best tool for content migration is the export/import functionality.

For the purposes of this tutorial I created a Site Collection from the Team Site template called IT Department, and then created the Technical Support site within the site collection. Now let's migrate the Technical Support site to a different site collection.

SharePoint Content Migration

IT Department site with Technical Support sub site visible in the top-menu.

Using stsadm, you can move the site using the export/import commands. Let's try and export the site using the stsadm tool. If you want, you can provide additional parameters, which can be very important to the end result of the migration. To see the options, type in "stsadm -o export" without additional parameters.

SharePoint Content Migration

stsadm –o export command parameters visible in the cmd.exe window

Two of the most used options are includeusersecurity and versions. If you add the includeusersecurity parameter, the site will be exported with its security assignments. If you don't want to lose the security settings after the migration, you should add this parameter to the command. The versions parameter is used when you have versioning enabled and you don't want the default content migration behaviour, which exports only the last major version. Using the default settings you may find that some documents are missing from the migration if they had no major versions at all (only minor versions, which is often the case). I would suggest using the versions 2 parameter if you only want the latest document versions migrated. Now we can revise the command to include security and the latest document version:

stsadm -o export -url http://sp2010/techsupport -filename C:\techsupportsite -includeusersecurity -versions 2

When this command is executed, stsadm will start logging to the cmd window and to the log file in the export directory. If all the content is successfully exported, you will see the end result with no warnings or errors. If you encounter any warnings or errors, you can use the export log file to find and fix the issues before migrating the site.

SharePoint Content Migration

End of the Export command with the successful report

Now, we will use the import command to restore the exported site on a different site collection. You can also use this file to import the site with content on different web applications, or even different farms, but then you will have to provide the same set of site collection features, web parts and security accounts.

In SharePoint 2010, you can import to a URL that doesn’t yet exist and you do not have to create the empty site just to import the content later.

In the command line, enter the below command:

stsadm -o import -url http://sps2010/sites/newsitecollection/techsupport -filename techsupportsite.cmp -includeusersecurity

After a successful import we can check the migrated site to identify any issues. In this example there were no issues with the sample content, but this will obviously depend on the migrated content. If you are using third-party SharePoint objects, you should deploy them on the new application prior to using the import command.

SharePoint Content Migration

Site migrated to new site collection using stsadm tool.

In SharePoint 2010 we can also use PowerShell for content migration.  With the new PowerShell capability, we can export/import a site, list or document library.

To do this, open the SharePoint 2010 Management Shell from  Start – All Programs:

SharePoint Content Migration

SharePoint 2010 Management Shell location in the Start Menu

In the Windows PowerShell window, I am going to use the Export-SPWeb command. Export-SPWeb is capable of exporting a site collection, web application, list or library. Using stsadm (which was the only way in SharePoint 2007) we are not able to export just a list or library, since the lowest level is the site.

Let’s look at an example which corresponds to the stsadm we used before:

Export-SPWeb -Identity http://sps2010/techsupport -Path C:\techsupportPS -IncludeUserSecurity -IncludeVersions CurrentVersion

The syntax here is very similar, but there are additional parameters that can be set using this command. Some parameters are also extended or work differently than their corresponding stsadm options. Below is a description of the more important parameters:

  • Identity: Specifies the URL of the site (web) you wish to export. You can also use the object's GUID instead of the URL.
  • Path: Specifies the location and name of the exported file. If you use NoFileCompression flag, you will have to specify a directory in the Path string parameter.
  • Confirm: Prompts for user permission before executing script. Useful when scripting sets of management commands that require user input.
  • ItemURL: Specifies the URL of the SharePoint object you want to export. In this parameter you can specify lower-level objects, like lists and document libraries (see the example after this list).
  • UseSqlSnapshot: This will tell the export mechanism to use the direct database snapshot to collect the data, instead of using the default SharePoint web-based mechanisms.
  • WhatIf: Use this parameter when you only want to check what would happen if you attempted to export a  SharePoint object. It will not run any code, but only show  the message and describe the effect of running the cmdlet.
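
As an illustration of the ItemUrl parameter, here is a sketch that exports a single document library from the Technical Support site. The library name and the relative-URL format used for -ItemUrl are my assumptions for this example, so adjust them to your own site structure:

Export-SPWeb -Identity http://sps2010/techsupport -ItemUrl "/techsupport/Shared Documents" -Path C:\techsupportDocs.cmp -IncludeVersions CurrentVersion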

As with stsadm, where there is an export there is also an import. To import the exported file, use the Import-SPWeb cmdlet.

Example usage:

Import-SPWeb http://sps2010/sites/newsitecollection/techsupport -Path C:\techsupportPS.cmp -Overwrite

With the PowerShell Export-SPWeb and Import-SPWeb you can migrate sites, site collections, lists and document libraries between entirely different SharePoint 2010 farms.

That’s about it in terms of SharePoint content migration. You may think I didn’t show you the Central Administration and Unattached Database Recovery functionality… but why? Because it’s simply the export/import we’ve just learned, but with the  GUI added to the web site. Of course using PowerShell gives us much, much more than using Central Administration.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , | Leave a Comment »