Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘http’

New Features in IIS 8

Posted by Alin D on September 10, 2012

Each new version of Microsoft Internet Information Services is a little like a new installment in a novel series where each book comes several years apart, but proves to be well worth waiting for.
IIS 8, which comes with Windows Server 2012, has new features aimed at those who are putting together large-scale Web hosts. But one of the nice side effects of those big-scale features is how they dial down to smaller hosts and individual servers as well.

CPU throttling: the next generation

IIS 7 has a CPU throttling function that prevents unruly sites from gobbling up too much CPU. Unfortunately, it has an all-or-nothing flavor to it which makes it less useful than it ought to be.

First, when you have throttling set for a site, the only form of throttling available is to kill the site process entirely for a certain length of time. You can set the CPU threshold and kill length, but it means the site is completely disabled for whatever that length of time is. There is no native way to configure IIS to have a site only use 90% of CPU for processor X (or all processors) at any time.

Second, IIS 7’s CPU throttling is bound to a given application pool. This isn’t so bad if you have a separate pool for each website, and that by itself isn’t a bad idea if you have the CPU cores to throw at such a proposition. (Even if you only have one core, it’s still not a bad idea for low-CPU sites.) But if you have multiple sites that share the same application pool, they all go offline if CPU throttling kicks in for only one of those sites.

IIS 8’s solution to all this is to add two new actions to the way CPU throttling works: Throttle and Throttle under load. Throttle caps CPU for a given worker process, and any child processes spawned by that worker as well. Throttle under load allows a site to use as much CPU as is available, but will throttle that process back if it starts competing for CPU with other processes.

This allows throttling to be done without killing the process wholesale, and adds that much more flexibility in multi-tenancy environments. You can run that many more sites side-by-side, with or without setting explicit processor affinities for their worker processes, and not have them stomp all over each other.
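
To make that concrete, the new actions show up as values of the application pool’s cpu.action setting and can be adjusted from the WebAdministration module. The sketch below is only illustrative — the pool name and the 30% cap are placeholder values I’ve chosen, not anything from the product documentation:

Import-Module WebAdministration
# Cap this pool at roughly 30% CPU; the limit is expressed in 1/1000ths of a percent
Set-ItemProperty IIS:\AppPools\TenantPool -Name cpu.limit -Value 30000
# Only enforce the cap when other processes are actually competing for CPU
Set-ItemProperty IIS:\AppPools\TenantPool -Name cpu.action -Value ThrottleUnderLoad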

Another refinement is the Application Initialization Module, which allows a site to accept requests for pages and respond with a friendly message while the site code itself is still being spun up. This feature can keep people from pounding on their browser’s refresh button when a change to a library forces a recompile.
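
Here is a rough sketch of enabling that behavior from PowerShell; the feature name Web-AppInit, the pool name and the site name are assumptions on my part rather than anything taken from this article:

# Install the module, keep the pool warm, and preload applications on the default site
Install-WindowsFeature Web-AppInit
Import-Module WebAdministration
Set-ItemProperty IIS:\AppPools\TenantPool -Name startMode -Value AlwaysRunning
Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name applicationDefaults.preloadEnabled -Value $true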

SSL improvements

I’ve never liked the way IIS has handled SSL. “Clunky” and “cumbersome” are two of the less vitriolic adjectives I’ve used to describe the whole process of adding and managing SSL certificates to IIS. Thankfully, IIS 8 has three major new improvements to its handling of SSL.

Centralized certificate management. IIS 7 forces you to import each certificate into each instance of IIS, which is a headache if you’re managing a whole farm’s worth of servers. IIS 8 lets you create a Central Certificate Store, or CCS. This allows all the certificates needed across your farm to be placed in a single location. The name of the certificate file can be used to automatically map and bind the certificate to the domain in question, and multiple-domain certificates are also supported through this scheme (you just make multiple copies of the certificate and rename it appropriately).
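
As a hedged illustration of the naming convention and of turning the store on from PowerShell — the share path, service account, feature name and cmdlet parameters below are my assumptions, so verify them before relying on this:

# Files on the share are matched to bindings purely by name, for example:
#   \\certserver\ccs\www.contoso.com.pfx   -> binding for www.contoso.com
#   \\certserver\ccs\_.contoso.com.pfx     -> wildcard certificate for *.contoso.com
Install-WindowsFeature Web-CertProvider
Enable-WebCentralCertProvider -CertStoreLocation '\\certserver\ccs' `
    -UserName 'CONTOSO\ccsreader' -Password 'P@ssw0rd!' -PrivateKeyPassword 'PfxP@ss!'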

Server Name Indication support (for using SSL with host headers). Not long ago I discovered for myself, the very hard and painful way, how difficult it is to use SSL on a server where multiple sites share a single IP address and use host headers. A new technology named Server Name Indication (SNI) allows SSL to be used on sites that can only be reached via host headers, but it requires both a server and a client that support it. IIS 8 fixes the “server” end of the equation, and most recent browsers provide support (with one glaring exception being any version of IE on Windows XP).
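
In practice an SNI binding can be added from PowerShell with the -SslFlags parameter that New-WebBinding gains on Server 2012 (1 = SNI, 2 = central certificate store, 3 = both); the site and host names here are placeholders:

Import-Module WebAdministration
# Add an HTTPS binding that relies on SNI instead of a dedicated IP address
New-WebBinding -Name 'Contoso Shop' -Protocol https -Port 443 -HostHeader 'shop.contoso.com' -SslFlags 1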

Scalability. Thanks to improvements in how certificates are loaded and managed, SSL-enabled sites now scale far more efficiently, and you can support many more of them on the same hardware (up to thousands). On the same note, IIS’s handling of configuration files (*.config) has been reworked for the same kind of scale.

FTP Logon Restrictions and Dynamic IP Restrictions

I have a theory: because Microsoft has had such a brutal trial by fire as far as security goes, they’re being forced to constantly think about new and more proactive ways to make their server products secure. To that end, two new security features in IIS help provide short- and long-term blocking of IP addresses for both the HTTP and FTP services. Granted, nobody uses IP blocking as any kind of permanent solution to a security issue, but such a feature is still useful to have as a stopgap against attacks.

Dynamic IP restrictions allow you to configure IIS to block access from IP addresses that break rules about how many requests they attempt to make in a given period of time, or when using more than a certain number of concurrent requests. What’s more, the denial of the connection can be done via more than just returning the standard 403.6 Forbidden error IIS 7 would use in such circumstances. The server can be set to return a 401 (Unauthorized), 403 (Forbidden), 404 (Not Found), or simply terminate the HTTP connection without even returning an error. A special proxy mode also allows inspection of the HTTP headers for the X-Forwarded-For header, as a way to find the source of traffic that may be forwarded through a proxy.
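
A sketch of what that looks like from PowerShell is below; the section lives at system.webServer/security/dynamicIpSecurity, and the thresholds are arbitrary placeholder values:

Import-Module WebAdministration
$sect = 'system.webServer/security/dynamicIpSecurity'
# Answer blocked clients with 404 rather than the default 403
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter $sect -Name denyAction -Value 'NotFound'
# Block clients that send more than 40 requests in any 2-second window
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter "$sect/denyByRequestRate" -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter "$sect/denyByRequestRate" -Name maxRequests -Value 40
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter "$sect/denyByRequestRate" -Name requestIntervalInMilliseconds -Value 2000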

FTP logon attempt restrictions allow you to lock out people if they try to make multiple failed attempts to log into the FTP server. The lockout period is normally 30 seconds, but it can be set to anything you want, and the number of attempts is also flexible. This works a little like the tarpitting/graylisting systems used to keep spammers from overwhelming mail servers: only those who are clearly trying to barge their way in get stalled.

Multicore scaling and NUMA awareness

Most of the IIS servers I’ve dealt with have been minimal affairs, with a handful of low-traffic sites that share space on a single- or dual-core server. I know full well, though, that some IIS setups are sprawling affairs: dozens of cores or sockets, many gigabytes of RAM, and all sorts of other high-end hardware features to make sysadmins cry with joy.

IIS hasn’t always made the best possible use of some of those high-end features. Multiple cores, for instance: according to Microsoft, one of the problems of adding more cores is that after a while it actually hurts performance in some setups “because the cost of memory synchronization out-weighs the benefits of additional cores.” In other words, the processing power of those extra cores is offset by the overhead required to keep memory synchronized with a given core.

IIS 8 has a new feature to compensate for this problem: Non-Uniform Memory Architecture (NUMA) awareness. NUMA servers dedicate specific areas of physical memory to specific processors, with crossbar or bus systems to allow processors to talk to the memory that’s not “local” to that processor. Both operating systems and software have to be written to take proper advantage of NUMA, but the benefits include being able to do things like hot-swap failing memory modules and, most importantly, not succumb to the ways poor memory architecture can kill performance.

IIS 8 supports NUMA and multicore scaling in several different ways:

Workload partitioning. IIS can pool worker processes by creating the same number of worker processes as there are NUMA nodes, so each process runs on its own node — essentially, the “Web garden” approach. You can also have IIS set up multiple worker processes and have the workloads distributed across each node automatically.

Node optimization. The default method IIS uses for picking a node when a given worker process starts is to choose the one that has the most available memory, since that’s typically going to yield the best results. IIS can also default to letting Windows Server itself make that decision, which is useful if you have other NUMA-aware server-level apps running on the same hardware.

Thread affinity. IIS can use one of two methods to figure out how to pair threads with NUMA nodes. The “soft affinity” method allows a given thread to be switched to another NUMA node if that node has more CPU to spare. The “hard affinity” method picks a node for a thread and leaves it there.
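
These knobs live as attributes on each application pool’s cpu element; a hedged sketch (the pool name and attribute names are from memory, so check them against your applicationHost.config schema) might look like this:

Import-Module WebAdministration
# Let Windows Server choose the NUMA node for the worker process
Set-ItemProperty IIS:\AppPools\TenantPool -Name cpu.numaNodeAssignment -Value WindowsScheduling
# Pin the worker's threads to that node instead of allowing soft affinity
Set-ItemProperty IIS:\AppPools\TenantPool -Name cpu.numaNodeAffinityMode -Value Hard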

WebSockets

This long-in-development technology is supposed to fix one of the major limitations of HTTP since its inception: you can’t really keep a connection open indefinitely between the client and the server for real-time full-duplex communication. IIS 8 adds WebSocket support, although it has to be installed as part of the “Application Development” package of add-ons when setting up IIS 8 (along with, say, ASP.NET 4.5).
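
On Server 2012 the same thing can be done from PowerShell; the feature names below are the ones I would expect, but it’s worth confirming them with Get-WindowsFeature first:

# List the WebSocket-related sub-feature, then install it along with ASP.NET 4.5
Get-WindowsFeature *WebSocket*
Install-WindowsFeature Web-WebSockets, Web-Asp-Net45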

Conclusion

While many of the new IIS 8 features are clearly designed for those hosting whole server farms or massive multi-core setups, there’s a lot here to appeal to folks on other tiers as well. I know that if I ever upgrade the server I’m using to a multicore model—even just 2-4 cores—I’ll have a whole raft of new IIS features I can use to make it all the more worth my investment.

Posted in Windows 2012


How to use Windows Network Load Balancing to load balance Exchange 2010

Posted by Alin D on November 13, 2011

When administrators consider load balancing their Exchange 2010 installations, they often turn to dedicated — and frequently expensive — hardware products. Fortunately, if you’re Linux-savvy, a free load-balancing option is available. If not, that’s alright, help is on the way.

You can use Windows Network Load Balancing to load balance Exchange, but several limitations make it impractical for certain Exchange deployments. For example, Microsoft supports no more than eight Exchange nodes in a single Network Load Balancing cluster. You also can’t combine Windows Failover Clustering and Network Load Balancing because they can’t interact with each other.

In cases like these, you need external assistance. Help usually comes in the form of hardware-based load balancers. Unfortunately, those products aren’t cheap. Prices typically start around $1,500 for low-end models and quickly soar into the tens of thousands of dollars.

Most companies don’t have to spend that kind of money though. You can use a free virtual-software appliance that acts as a load balancer. This appliance can be installed on a repurposed server or even in a virtual machine (VM) on shared hardware. All you’re really “spending” is the time and effort to get it up and running.

Your free load-balancing options for Exchange 2010
One such appliance is HAProxy, a Linux-based Layer 4 load balancer for TCP and HTTP applications. There are already a number of third-party products like redWall’s Firewall and Exceliance’s HAPEE distribution that use the tool, as well as many satisfied users — the Fedora Project, Reddit, StackOverflow and many more.

You must be comfortable with Linux to use HAProxy in your Exchange 2010 production environment. If not, Microsoft-certified systems administrator Steve Goodman created the Exchange 2010 HAProxy Virtual Load Balancer.

The appliance is a pre-packaged version of HAProxy, built on Ubuntu Linux, that can be deployed on VMware vSphere or Microsoft Hyper-V with minimal work required by an Exchange administrator.

All you need is a solid understanding of your network topology and some familiarity with either VMware or Hyper-V. While you don’t need to fully understand Linux to install Goodman’s appliance, it does help to know about the OS if you want to fine-tune aspects of the tool that aren’t available through the Web interface. That said, you can get the HAProxy Virtual Load Balancer up and running in your Exchange 2010 lab environment without being a Linux expert.

The appliance comes in two formats: a VMware vSphere .ovf file and a Hyper-V-compatible .vhd file. The tool’s website contains step-by-step instructions on how to set up HAProxy on either vSphere or Hyper-V.

Setting up the Exchange 2010 HAProxy Virtual Load Balancer
Boot the appliance and you’re greeted with a simple console login screen. To begin, type in root as your username and setup as your password. You will be prompted to choose a new password. This secures the setup process; you can change the password later on.

Next comes the most important part of the setup. You must set the IP address, netmask and default gateway for HAProxy. If you mistype anything, press Ctrl+C to get out of the script, type logout to leave, then log back in. Remember to use your new password, then repeat the login process. After you complete the first step, you will be given a URL; make sure to write it down. You will be prompted to log back in when HAProxy reboots.

The rest of the setup process — as well as most HAProxy management — is done through HAProxy’s Web interface. Configure the static RPC ports for your client access servers, then list the IP addresses of each of the client access servers you want to balance. You must also set the time zone and the network time protocol (NTP) servers. Don’t touch the console login screen unless there’s an overwhelming reason to do so.
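
Pinning the RPC Client Access endpoint to a static port is done on each client access server rather than in HAProxy itself. A commonly cited sketch is below; the port number is an arbitrary pick from the dynamic range, the registry key is assumed to already exist, and the Address Book service needs a similar change:

# On each Exchange 2010 client access server: use a fixed TCP port for RPC Client Access
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem' `
    -Name 'TCP/IP Port' -PropertyType DWord -Value 59531 -Force | Out-Null
# Restart the service so the new port takes effect
Restart-Service MSExchangeRPC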

While the HAProxy Virtual Load Balancer has been through plenty of development, the virtual appliance is still a work in progress. For example, HAProxy is a Layer 4 (TCP) balancer, not a Layer 7 (application-level) balancer. It is not completely “Exchange-aware,” so it can’t do things like application-level monitoring or SSL offloading — at least, not yet.

These items may eventually be added, and it sounds like Goodman plans to further improve the tool. “Subsequent versions will be production ready, as this is totally aimed at being an easy-to-use free alternative to paid-for hardware and virtual load balancers for Exchange 2010,” Goodman said.

 

Posted in TUTORIALS

How the BranchCache feature in Windows Server 2008 R2 could speed your Windows 7 migration plans

Posted by Alin D on August 17, 2011

Searching for a singular business reason to accelerate your upgrade to Windows 7? Look no further than Windows Server 2008 R2’s new BranchCache feature.

BranchCache creates an automated infrastructure for caching documents right within your individual branch offices. Once a remote document is accessed by a branch office desktop, that document is then cached in the remote location. Any future request for the document is automatically referred to its new second home instead of its far-away remote source.

The primary reason for local caching is speed. Caching documents to a local storage area dramatically reduces their load time. If a needed document is locally available, a requesting client will automatically load that document from the local cache instead of going over the wire. Since the document doesn’t need to traverse the WAN, the net result for users is a dramatic improvement in performance with no extra outlay in network hardware.

Particularly powerful in Microsoft’s implementation is the level of automation available right out of the box. The entire BranchCache infrastructure is designed to be a “set it and forget it” implementation. Once turned on, clients are automatically redirected to local copies of requested documents with no further involvement by administrators or change to user behaviors. This means that BranchCache runs virtually invisible; quietly redirecting users to close-in copies while preserving precious WAN bandwidth.

Today, BranchCache is only available with the combination of Windows 7 and Windows Server 2008 R2, making it a strong OS upgrade justification for distributed businesses who suffer from slow network links.

How does it work?

If your network is comprised of a single location, or multiple locations with exceptionally fast connections between, you should stop reading now. BranchCache isn’t meant for you. For the rest of us, we likely support a high-speed LAN in the main office, but comparatively slow connections out to our remote locations.

Think for a minute about this kind of network. Employees who work in the main office can use documents quickly and efficiently because they’re on the local LAN, but other users in remote offices don’t enjoy the same performance. Working with a Word document or Excel spreadsheet on a remote file server can be exceptionally painful. Often, connections are so slow that users are forced to download the document, update it, and upload it when complete — a multi-step process that can take several minutes per document. Users who work this way are not efficient, and generally quite unhappy.

BranchCache solves this problem by automatically caching a document once it is accessed. This means that while the first attempt to access a document still requires WAN traversal, subsequent accesses can occur from a speedy local cache.

Here’s how it works.

Let’s assume that a remote office client needs to access a document on a file server in the main office. The client issues a request for the file to a BranchCache-enabled file server. That server responds first by returning a tiny set of identifying data that describes the “chunks of content” that the client wants. The client then uses these clues to search its local network for a computer that has already downloaded the content.

It is here where Microsoft’s BranchCache implementation really shines. BranchCache smartly allows for two different ways to locally cache that desired content. The first, called Hosted Cache mode, uses a specially-identified server that runs the BranchCache feature and is housed in each remote office. This server becomes the central storage location where clients can look to find any documents that have been cached locally.

But some environments can’t afford to buy a separate server for each remote office. Others may have remote offices that are so small that local servers don’t make sense. In either of these cases, BranchCache can alternately be configured into Distributed Cache mode. In this mode, each individual Windows 7 computer in the remote office is configured to host its own mini-cache. Designed for small remote offices with less than 50 computers, this Distributed Cache mode securely makes your desktops do the work without the cost of an extra server.

There are obvious benefits and gotchas associated with both solutions. While Distributed Cache mode doesn’t require an extra server, it does require a small bit of extra processing power on each computer as well as extra disk space to store each computer’s mini-cache. Requests for locally-cached documents in Distributed Cache mode also require the WS-Discovery protocol, which is a multicast protocol sent over UDP that effectively limits each caching boundary to an individual subnet. It is because of these extra needs that Distributed Cache mode is generally limited to very small branch offices with few users.

Hosted Cache mode obviously requires the purchase of an additional server and Windows Server 2008 R2 license for each branch office (or the enabling of the service on an existing server). But doing so consolidates all cached document copies into one place. Further, Hosted Mode enables clients to directly contact that server rather than multicast around the network. Because directed connections are used with Hosted Cache mode, there are no subnet limitations.

In either case, the BranchCache feature must be installed onto any file servers that will participate. Participating file servers must run Windows Server 2008 R2 and have the File Services role installed with the BranchCache for Network Files role service enabled.
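
On a Windows Server 2008 R2 file server that usually amounts to a single feature installation from an elevated PowerShell prompt; FS-BranchCache is the feature name I’d expect for “BranchCache for Network Files,” but confirm it with Get-WindowsFeature before relying on it:

Import-Module ServerManager
# Check the exact feature name, then add the role service to the file server
Get-WindowsFeature *Branch*
Add-WindowsFeature FS-BranchCache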

By default, individual file shares must be tagged for BranchCache support as well. This is done within the Share and Storage Management console’s Caching tab. There, select Enable BranchCache to configure the share for caching. Clients can be configured individually using the netsh command, or via Group Policy.
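
For one-off client configuration, netsh has a dedicated branchcache context; a minimal sketch for the two client modes (the hosted cache server name is a placeholder) looks like this:

# Distributed Cache mode: peers on the local subnet cache content for each other
netsh branchcache set service mode=DISTRIBUTED
# Hosted Cache mode: point the client at the branch office cache server instead
netsh branchcache set service mode=HOSTEDCLIENT location=BranchSrv01
# Verify what the client ended up with
netsh branchcache show status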

Secure and pervasive

BranchCache works with HTTP documents as well as traditional documents accessed through SMB. Particularly useful is its position below both the HTTP and SMB protocols in the Windows stack.

By operating at a level below both of these protocols, any tool that leverages the Windows stack for SMB or HTTP traffic will automatically and invisibly leverage BranchCache. This means that common applications like Robocopy, Windows Media Player, Internet Explorer, Flash, and Silverlight will all automatically make use of local copies if they are available. Neither you the administrator nor your users need to change behaviors in any way to make use of this infrastructure.

Security is also a concern with this service’s potential for distributing documents all around your network. Built into the BranchCache service are security measures for protecting data both while it sits in cache locations and when it crosses the network.

First, BranchCache is designed to respect existing NTFS permissions on documents. Clients that attempt to access a document must be authenticated and authorized by the remote content server before any further steps are taken. Data sent from a cache storage location to a requesting computer is encrypted using AES 128. While stored in the clear by default, the cache storage location itself can be further protected by implementing BitLocker or EFS on each computer’s cache file.

Two well-written documents are available that can help usher you into a new era of high-performance distributed file sharing. Both are available for download from Microsoft’s website. The first is titled BranchCache Technical Overview. The second document contains more detailed installation information and is dubbed BranchCache Early Adopter’s Guide.

Since BranchCache requires Windows Server 2008 R2 on any file servers and hosted cache servers, adding this service will require a server upgrade, and once again, clients must be running Windows 7 to participate. Yet this “set it and forget it” mechanism to squeeze more performance out of existing WAN lines is one feature that absolutely compels an upgrade.

If you’re currently suffering from poor performance at your remote offices, consider BranchCache as your no-added-cost solution for improving user satisfaction.

Posted in TUTORIALS

PowerShell remote control: one-to-one and one-to-many

Posted by Alin D on July 26, 2011

When I first started using PowerShell (in version 1), I was playing around with the Get-Service command, and noticed that it had a -computerName parameter. Hmmm … does that mean it can get services from other computers, too? After a bit of experimenting, I discovered that’s exactly what it did. I got very excited and started looking for -computerName parameters on other cmdlets, and was disappointed to find that there were very few. A few more were added in v2, but the commands that have this parameter are vastly outnumbered by the commands that don’t.

What I’ve realized since is that PowerShell’s creators are a bit lazy—and that’s a good thing! They didn’t want to have to code a -computerName parameter for every single cmdlet, so they created a shell-wide system called remoting. Basically, it enables any cmdlet to be run on a remote computer. In fact, you can even run commands that exist on the remote computer but that don’t exist on your own computer— meaning that you don’t always have to install every single administrative cmdlet on your workstation. This remoting system is powerful, and it offers a number of interesting administrative capabilities.

WinRM overview

Let’s talk a bit about WinRM, because you’re going to have to configure it in order to start using remoting. Once again, you only need to configure WinRM—and PowerShell remoting—on those computers that will receive incoming commands. In most of the environments I’ve worked in, the administrators have enabled remoting on every Windows-based computer (keep in mind that PowerShell and remoting are supported all the way back to Windows XP). Doing so gives you the ability to remote into client desktop and laptop computers in the background (meaning the users of those computers won’t know you’re doing so), which can be tremendously useful.

WinRM isn’t unique to PowerShell. In fact, it’s likely that Microsoft will start using it for more and more administrative communications—even things that use other protocols today. With that in mind, Microsoft made WinRM able to route traffic to multiple administrative applications—not just PowerShell. WinRM essentially acts as a dispatcher: when traffic comes in, WinRM decides which application needs to deal with that traffic. All WinRM traffic is tagged with the name of a recipient application, and those applications must register with WinRM to listen for incoming traffic on their behalf. In other words, you’ll not only need to enable WinRM, but you’ll also need to tell PowerShell to register as an endpoint with WinRM.

One way to do that is to open a copy of PowerShell—making sure that you’re running it as an Administrator—and run the Enable-PSRemoting cmdlet. You might sometimes see references to a different cmdlet, called Set-WSManQuickConfig. There’s no need to run that one; Enable-PSRemoting will call it for you, and Enable-PSRemoting does a few extra steps that are necessary to get remoting up and running. All told, the cmdlet will start the WinRM service, configure it to start automatically, register PowerShell as an endpoint, and even set up a Windows Firewall exception to permit incoming WinRM traffic.

If you’re not excited about having to run around to every computer to enable remoting, don’t worry: you can also do it with a Group Policy object (GPO), too. The necessary GPO settings are built into Windows Server 2008 R2 domain controllers (and you can download an ADM template from download.Microsoft.com to add these GPO settings to an older domain’s domain controllers). Just open a Group Policy object and look under the Computer Configuration, then under Administrative Templates, then under Windows Components. Near the bottom of the list, you’ll find both Remote Shell and Windows Remote Management. For now, I’m going to assume that you’ll run Enable-PSRemoting on those computers that you want to configure, because at this point you’re probably just playing around with a virtual machine or two.

WinRM v2 (which is what PowerShell uses) defaults to using TCP port 5985 for HTTP and 5986 for HTTPS. Those ports help to ensure it won’t conflict with any locally installed web servers, which tend to listen to 80 and 443 instead. You can configure WinRM to use alternative ports, but I don’t recommend doing so. If you leave those ports alone, all of PowerShell’s remoting commands will run normally. If you change the ports, you’ll have to always specify an alternative port when you run a remoting command, which just means more typing for you.

If you absolutely must change the port, you can do so by running this command:

winrm set winrm/config/listener?Address=*+Transport=HTTP @{Port="1234"}

In this example, “1234” is the port you want. Modify the command to use HTTPS instead of HTTP to set the new HTTPS port.

I should admit that there is a way to configure WinRM on client computers to use alternative default ports, so that you’re not constantly having to specify an alternative port when you run commands. But for now let’s stick with the defaults Microsoft came up with.

How to Use Enter-PSSession and Exit-PSSession for one to one remoting

PowerShell uses remoting in two distinct ways. The first is called one-to-one, or 1:1, remoting (the second way is one-to-many remoting, and you’ll see it in the next section). With this kind of remoting, you’re basically accessing a shell prompt on a single remote computer. Any commands you run will run directly on that computer, and you’ll see results in the shell window. This is vaguely similar to using Remote Desktop Connection, except that you’re limited to the command-line environment of Windows PowerShell. Oh, and this kind of remoting uses a fraction of the resources that Remote Desktop requires, so it imposes much less overhead on your servers!

To establish a one-to-one connection with a remote computer, run this command:

Enter-PSSession -computerName Server-R2

Of course, you’ll need to provide the correct computer name instead of Server-R2. Assuming you enabled remoting on that computer, that you’re all in the same domain, and that your network is functioning correctly, you should get a connection going. PowerShell lets you know that you’ve succeeded by changing the shell prompt:

[server-r2] PS C:\>

That prompt tells you that everything you’re doing is taking place on Server-R2 (or whatever server you connected to). You can run whatever commands you like. You can even import any modules, or add any PSSnapins, that happen to reside on that remote computer.

Even your permissions and privileges carry over across the remote connection. Your copy of PowerShell will pass along whatever security token it’s running under (it does this with Kerberos, so it doesn’t pass your username or password across the network).

Any command you run on the remote computer will run under your credentials, so you’ll be able to do anything you’d normally have permission to do. It’s just like logging directly into that computer’s console and using its copy of PowerShell directly.

Well, almost. There are a couple of differences:

  • Even if you have a PowerShell profile script on the remote computer, it won’t run when you connect using remoting. We haven’t fully covered profile scripts, but suffice to say that they’re a batch of commands that run automatically each time you open the shell. Folks use them to automatically load shell extensions and modules and so forth. That doesn’t happen when you remote into a computer, so be aware of that.
  • You’re still restricted by the remote computer’s execution policy. Let’s say your local computer’s policy is set to RemoteSigned, so that you can run local, unsigned scripts. That’s great, but if the remote computer’s policy is set to the default, Restricted, it won’t be running any scripts for you when you’re remoting into it.

Aside from those two fairly minor caveats, you should be good to go. Oh, wait—what do you do when you’re done running commands on the remote computer? Many PowerShell cmdlets come in pairs, with one cmdlet doing something and the other doing the opposite. In this case, if Enter-PSSession gets you into the remote computer, can you guess what would get you out of the remote computer? If you guessed Exit-PSSession, give yourself a prize. The command doesn’t need any parameters; just run it and your shell prompt will change back to normal, and the remote connection will close automatically.

What if you forget to run Exit-PSSession and instead close the PowerShell window? Don’t worry. PowerShell and WinRM are smart enough to figure out what you did, and the remote connection will close all by itself.

I do have one caution to offer. When you’re remoting into a computer, don’t run Enter-PSSession from that computer unless you fully understand what you’re doing.

Let’s say you work on Computer A, which runs Windows 7. You remote into Server-R2. Then, at the PowerShell prompt, you run this:

[server-r2] PS C:\> enter-pssession server-dc4

Now, Server-R2 is maintaining an open connection to Server-DC4. That can start to create a “remoting chain” that’s hard to keep track of, and which imposes unnecessary overhead on your servers. There are times when you might have to do this—I’m thinking mainly of instances where a computer like Server-DC4 sits behind a firewall and you can’t access it directly, so you use Server-R2 as a middleman to hop over to Server-DC4. But, as a general rule, try to avoid remote chaining.

When you’re using this one-to-one remoting, you don’t need to worry about objects being serialized and deserialized. As far as you’re concerned, you’re typing directly on the remote computer’s console. If you retrieve a process and pipe it to Stop-Process, it’ll stop as you would expect it to.

Posted in TUTORIALS

How to optimize WAN bandwidth by using Windows 7 BranchCache

Posted by Alin D on June 29, 2011

BranchCache is a new technology in Windows 7 and Windows Server 2008 R2 designed to optimize network bandwidth over slow wide area network links. To reduce WAN use, BranchCache copies documents from the main office to secure repositories on the remote network. As a result, when users at the remote office access files from the home office, the files are served up from the remote network’s cache rather than from the home office across the WAN link.

In the past, users at remote sites frequently clogged their WAN links when accessing large files stored on file servers at the home office. A 5 MB PowerPoint presentation on the shared drive at the home office can become 100 MB of network traffic as 20 people at the remote office each try to view it. With BranchCache, the file is downloaded to the remote office and stored in a local “cache” the first time it’s accessed. Subsequent requests for the same file are served up from that local cache, reducing the network traffic to the home office.

BranchCache is seamless for the end user. A user would launch the file from the home office as usual. The request for the file is sent to the home office file server, where the BranchCache service takes over. If that file has not been previously sent to the remote office, it’s copied and stored in a local cache, but if it has been sent BranchCache redirects the remote office computer to download the file from the existing cache on the remote network. All cached files are automatically encrypted to prevent unauthorized access. (Content is decrypted and delivered to the end user after the New Technology File System’s access control lists have verified that they are allowed to see the data.)

To maintain integrity and ensure users are working from the latest documents, BranchCache maintains a list of the files that were sent to each remote cache. When a request for a previously cached file is received, the service compares a cryptographic hash of the current file on the server with a hash of the file that was sent to the remote cache. If the hashes don’t match, the document was modified after it was cached. As a result, a new version of the document is sent across the WAN to the remote location’s cache.

The cache location at the remote office can be configured in distributed mode or hosted mode.

The distributed mode is the simplest to set up and configure because it doesn’t require any special servers or software at the remote site. In distributed mode, documents are stored on individual Windows 7 computers at the remote office. The Windows 7 computer that downloads the document first becomes the cache for that document. Other Windows 7 machines that request that document will be referred to the Windows 7 system hosting the cached document. If that computer isn’t online, the new computer will download the file and will become the cache for that document.

Since BranchCache is installed on Windows 7 clients by default, to turn on distributed mode, simply enable the service through Group Policy and select four predefined firewall settings for inbound and outbound discovery and communication. Group Policy settings can also be used to specify the percentage of disk space allowed for the cache as well as the network latency time that defines a remote connection. (By default, connection requests with greater than 80 millisecond latency are considered remote requests and automatically trigger BranchCache functionality, if enabled.)

Distributed mode uses the WS-Discovery protocol to identify local cache locations.

In hosted mode, a Windows Server 2008 R2 system must be present in each remote office location. The specified server is the central cache repository for all documents obtained from the main office. This mode provides higher availability for the cached documents since it’s more likely to be “always on” than a Windows 7 computer in distributed mode. The hosted-mode BranchCache service can live side by side with other applications on a Windows Server 2008 R2 system.
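
As a rough command-line sketch of hosted mode (clients then point at this server with mode=HOSTEDCLIENT, and the hosted cache server also needs a certificate the clients trust):

# On the Windows Server 2008 R2 machine in the branch office
netsh branchcache set service mode=HOSTEDSERVER
# Confirm the cache is active and see how much content it holds
netsh branchcache show status all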

BranchCache functionality helps reduce network traffic over slow WAN links and is intended to increase remote user satisfaction. However, the benefits of BranchCache are available only to Windows 7 Ultimate and Enterprise clients when accessing Server Message Block or HTTP content stored on Windows Server 2008 R2 systems. Perhaps it’s time to upgrade?

Posted in Windows 2008, Windows 7

How built-in security tools help secure Exchange Server 2010

Posted by Alin D on June 27, 2011

Securing Exchange Server 2010 is a process that involves making key decisions that can negatively affect access to features and ease of use. Exchange administrators are often seen as the bad guy for doing the right things — security-wise — for the organization.

There’s good news for administrators; Exchange Server 2010 is more secure and feature-rich out-of-the-box than its predecessors. This tip introduces some major security vulnerabilities of Exchange 2010 and how the server can protect itself against them.

Transport security in Exchange 2010

The SMTP protocol is the first challenge to Exchange 2010 security. SMTP is vulnerable to snooping since SMTP is an extremely open protocol that sends information in clear text.

Self-signed certificates are used in Exchange 2010 to enhance SMTP security. By default, the hub transport role leverages self-signed certificates to encrypt communications between transport servers within the organization, with no additional administrative intervention.

External-facing transport servers use opportunistic transport layer security (TLS) when connecting to remote SMTP hosts. This allows them to send encrypted communications outbound if the remote server has a trusted certificate. Administrators can also enable domain security for partner SMTP domains for mutual TLS encryption.
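
A rough Exchange Management Shell sketch of setting up domain security with a partner, with contoso.com and the connector names standing in as placeholders, might look like this:

# Add the partner domain to the domain-secured send and receive lists
Set-TransportConfig -TLSSendDomainSecureList contoso.com -TLSReceiveDomainSecureList contoso.com
# Require mutual TLS on the connectors that handle that partner's mail
Set-SendConnector 'Partner - Contoso' -DomainSecureEnabled $true -RequireTLS $true
Set-ReceiveConnector 'EXHUB01\Internet Receive' -DomainSecureEnabled $true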

Organizations that require encryption for regulatory compliance purposes like HIPAA can benefit from this feature. When an edge transport server role is introduced, an additional layer of security becomes available to the organization when external connectivity is limited to the perimeter network.

Protecting Exchange 2010 users from spoofing

Another potential vulnerability is the fact that messages sent with SMTP can be easily manipulated. One way that a message can be manipulated that could have disastrous consequences is spoofing.

Spoofing is the process of pretending to be someone you are not in an email message. In many cases, the recipient of a spoofed message does not identify that the message didn’t originate from the actual sender. A great example of this is when you receive an email from yourself advertising the latest get-rich-quick scheme.

Secure Multipurpose Internet Mail Extension (S/MIME) certificates can protect Exchange organizations from email address spoofing. S/MIME certificates let users digitally sign a message that they are sending. If the recipient trusts the certificate authority that issued the certificate, he can verify that the person in the From: field is, in fact, the same person who sent the message. Outlook 2010 and Outlook Web App (OWA) clients both support S/MIME.

You can also use S/MIME certificates to encrypt the actual messages that users send to one another. S/MIME requires an organization to invest in a public key infrastructure (PKI) solution. Leveraging Active Directory Group Policy and Exchange Server 2010 can facilitate S/MIME deployment.

Exchange Server 2010 and Active Directory integration

The latest addition to the message security feature set is the tight integration between Exchange Server 2010 and Active Directory Rights Management Services (AD RMS). If an organization introduces AD RMS, administrators can use it to enforce compliance for security policies.

An example of this might include encrypting email messages as they’re sent between certain individuals in an organization. AD RMS is better than S/MIME because it enables Exchange 2010 transport servers to decrypt messages, scan for viruses that may be attached and then securely deliver the messages to the target recipients.

Role Based Access Control (RBAC) in Exchange 2010

Much like transport servers, Client access server (CAS) roles are often Internet-facing. There are a variety of Internet protocols that can be used to access users’ mailboxes, which makes the CAS role vulnerable. Protocols such as HTTP (OWA/ActiveSync/Outlook Anywhere), IMAP4 and POP3 each have potential vulnerabilities. However, similar to SMTP, they can be protected with certificates.

The CAS role cannot use self-signed certificates without making users aware that they’re not from a trusted certificate authority. Additional certificates from trusted CAs must be obtained for the CAS servers.

Exchange Server 2010 includes a new Certificate Wizard in the EMC that simplifies the request-and-import process. With an increasing dependence on certificates for security, it’s nice to have assistance to ensure that all potential vulnerabilities are being accounted for as new certificate requests are generated. It’s also helpful that you don’t need to use the Exchange Management Shell.
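
That said, the same request can still be produced from the shell for those who prefer it; the names and file paths below are placeholders:

# Generate a certificate request covering the common client access namespaces
$req = New-ExchangeCertificate -GenerateRequest -PrivateKeyExportable $true `
    -SubjectName 'cn=mail.contoso.com' `
    -DomainName mail.contoso.com, autodiscover.contoso.com
Set-Content -Path C:\certs\cas.req -Value $req
# Once the CA issues the certificate, import it and enable it for the web services
Import-ExchangeCertificate -FileData ([Byte[]](Get-Content C:\certs\cas.cer -Encoding Byte -ReadCount 0)) |
    Enable-ExchangeCertificate -Services IIS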

Protecting Exchange Server 2010 with ForeFront

Just as an edge transport server creates an additional layer of protection for SMTP communication, there needs to be equivalent protection for HTTP, IMAP4 and POP3 connections. To provide that protection, an application layer firewall becomes a reverse proxy for the applications that use these protocols.

Microsoft strongly recommends using an application firewall and provides two related products, both of which are next-generation ISA servers that can protect the CAS roles in the perimeter network.

Forefront Protection for Exchange 2010 (FPE) is the latest Microsoft technology you can use to protect your organization from spam, viruses, phishing and other security issues. You can deploy FPE on the edge, hub and mailbox server roles on-premise; you can also use Forefront Online Protection for Exchange 2010 (FOPE) in the cloud, in addition to on-premise protection.

Posted in Exchange

How to Setup RPC over HTTP for Microsoft Outlook

Posted by Alin D on June 27, 2011

Most of the work for setting up RPC over HTTP actually has to be done on the server side. On the client side, you’ll need to ensure that your client computers are running Microsoft Windows XP. If you’re using Service Pack 1, you’ll need the Q331320 hotfix, which is included with Service Pack 2 and later. You’ll also need to have Exchange Server 2003 running on Windows Server 2003 for the front-end and back-end servers your users communicate with, and all global catalogs and domain controllers that your servers and clients talk to must also be running Windows Server 2003.

Note: The Office Resource Kit contains a wealth of material on deploying RPC over HTTP using the Custom Installation Wizard, which makes it possible to seamlessly enable it for some or all of your users at the time you deploy it.

The settings for RPC over HTTP are associated with individual profiles and can only be applied to a single Exchange server account in each profile. You modify these settings using the same interface you’re probably familiar with, but the settings themselves are different. (Remember, you must already have set up your Exchange servers and global catalogs as described in Chapter 11.)

The key to getting RPC over HTTP set up for Outlook is found in a single simple check box, Connect To My Exchange Mailbox Using HTTP, shown in Figure 13-4. (You get to this check box by editing an account with the Tools | Email Accounts command, clicking Change, clicking More Settings, and clicking the Connection tab.) This check box is visible when you’re running Outlook 2003 on a system that meets the prerequisites and talking to an Exchange server that meets its prerequisite requirements. If any component is missing or misconfigured, the check box won’t appear.

After you select the check box, of course, the real fun begins. The Exchange Proxy Settings button controls the appearance of the Exchange Proxy Settings dialog box (see Figure 13-5). You can specify the URL for your Exchange server (which, for a standard Exchange Server 2003 installation, will be the same as the name of the front-end server) and whether you want to require the use of SSL. For maximum security, you should ensure that the Connect Using SSL Only and Mutually Authenticate The Session When Connecting With SSL check boxes are both selected; this combination provides the best protection against spoofing and eavesdropping. The other settings are pretty much irrelevant from a security standpoint, with the exception of the Use This Authentication When Connecting To My Proxy Server For Exchange control.

There are two other useful things to know about Outlook 2003 RPC over HTTPS support. The first is that you can disable the user interface controls that let users change RPC over HTTPS behavior. This is useful if you want to ensure that your users don’t set it up on their own, or if you want to prevent them from changing settings once you’ve deployed them. To do this, add the EnableRPCTunnelingUI value (a REG_DWORD) to HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\11.0\Outlook\RPC. When this value is set to 0, the user interface (UI) controls are hidden; when it’s set to 1, or not present, the UI controls are visible as long as Outlook is running on a machine that meets the operating system requirements.
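
A quick sketch of applying that on a client from a PowerShell prompt, using reg.exe so the key is created if it doesn’t already exist:

# Hide the RPC-over-HTTP UI controls in Outlook 2003 for the current user
reg add "HKCU\Software\Policies\Microsoft\Office\11.0\Outlook\RPC" /v EnableRPCTunnelingUI /t REG_DWORD /d 0 /f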

The other useful thing to know is that you can turn on RPC over HTTPS at a later date, after your initial Outlook 2003 deployment. To do this, you should use the Office Resource Kit’s Custom Maintenance Wizard, which lets you make some types of configuration changes and deploy them as files that can automatically update installed Office configurations.

Posted in Exchange

Best practices for good Microsoft IIS 7 security

Posted by Alin D on June 21, 2011

Microsoft’s Internet Information Services (IIS) Web server has presented enterprises with more than its share of security problems over the years, including the infamous Code Red worm nearly a decade ago. A key security concern with IIS has always been the number of features that are automatically installed and enabled by default, such as scripting and virtual directories, many of which proved vulnerable to exploit and led to major security incidents.

With the release of IIS 6 a few years ago, a "lockdown by default" approach was introduced, with several features either not installed or installed but disabled by default. IIS 7, the newest iteration, goes even further. It's not even installed on Windows Server 2008 by default, and when it is installed, the Web server is configured to serve only static content with anonymous authentication and local administration, resulting in the simplest of Web servers and the smallest possible attack surface for would-be hackers.

This is possible because IIS 7 is completely modularized. Let’s briefly dig into why that is and how it enables a more secure product. Essentially administrators can select from more than 40 separate feature modules to completely customize their installation. By only installing the feature modules required for a particular website, administrators can greatly reduce the potential attack surface and minimize resource utilization.

Be aware, however, this is true only with a clean install. If you are upgrading your Windows OS and running an earlier version of IIS, all the metabase and IIS state information is gathered and preserved. Consequently, many unnecessary Web server features can be installed during an upgrade. It is therefore good practice for an organization to revisit its application dependencies on IIS functionality after an upgrade and uninstall any unneeded IIS modules.

Fewer components also mean fewer settings to manage and fewer problems to patch, as it's only necessary to maintain the subset of modules that are actually being used. This reduces downtime and improves reliability. Also, the IIS Management Console, with all its confusing tabs, has been replaced with a far more intuitive GUI tool, which makes it easier to visualize and understand how security settings are implemented. For example, if the component supporting basic authentication is not installed on your system, the configuration setting for it doesn't appear and confuse matters.

So what components are likely to be needed to run a secure IIS? The first six listed below will be required by any website running more than just static pages; items seven and eight will be necessary for anyone needing to encrypt data between the server and client; and shared configuration (item nine) is useful when you have a Web farm and want each Web server in the farm to use the same configuration files and encryption keys:

  1. Authentication includes integrated Windows authentication, client certificate authentication and ASP.NET forms-based authentication, which lets you manage client registration and authentication at the application level, instead of relying on Windows accounts. 
  2. URL Authorization, which integrates nicely with ASP.NET Membership and Role Management, grants or denies access to URLs within your application based on user names and roles so you can prevent users who are not members of a specific group from accessing restricted content. 
  3. IPv4 Address and Domain Name Rules restrict content access based on IP address and domain name. The new "allowUnlisted" property makes it much easier to deny access to all IP addresses except those explicitly listed (see the web.config sketch after this list). 
  4. CGI and ISAPI restrictions allow you to enable and disable dynamic content in the form of CGI files (.exe) and ISAPI extensions (.dll). 
  5. Request filters incorporate the functionality of the UrlScan tool, restricting the types of HTTP requests that IIS 7 will process by rejecting requests containing suspicious data. Request filtering uses rules based on verb, file extension, request size, URL sequences and hidden segments to reject suspicious requests before they reach your application code. 
  6. Logging now provides real-time state information about application pools, processes, sites, application domains and running requests as well as the ability to track a request throughout the complete request-and-response process. 
  7. Server Certificates 
  8. Secure Sockets Layer 
  9. Shared Configuration
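
To make items 3 and 5 more concrete, here is a minimal web.config sketch; the IP range, limits and blocked extension are placeholders, and each section only takes effect if the corresponding feature module is installed (the ipSecurity section is also locked at the server level by default, so it may need to be unlocked before a site-level web.config can set it):

<configuration>
  <system.webServer>
    <security>
      <!-- Item 3: deny every address that is not explicitly listed -->
      <ipSecurity allowUnlisted="false">
        <add ipAddress="192.168.10.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
      <!-- Item 5: basic request filtering limits in the spirit of UrlScan -->
      <requestFiltering>
        <requestLimits maxUrl="2048" maxQueryString="1024" maxAllowedContentLength="30000000" />
        <fileExtensions>
          <add fileExtension=".exe" allowed="false" />
        </fileExtensions>
        <verbs>
          <add verb="TRACE" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>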

Other features that enhance the overall security of IIS 7 include new built-in user and group accounts dedicated to the Web server, which allow a common security identifier (SID) to be used across machines and simplify access control list management, as well as application pool sandboxing. Server administrators, meanwhile, have complete control over which settings application administrators can configure, while still allowing application administrators to make configuration changes directly in their applications without administrative access to the server.

IIS 7 is quite a different beast compared with previous incarnations, and that's a good thing. It has been designed and built along classic security principles, and it gives Windows-based organizations a Web server that can be more securely configured and managed than ever before. There may still not be enough here from a security perspective to sway Linux and Apache shops to change to IIS anytime soon, but Microsoft has definitely narrowed the security gap between them. It will take administrators a while to get used to the new modular format and administrative tools and tasks, but the training and testing time will be worth it, since IIS 7 runs on an OS and framework that administrators already know.

Posted in TUTORIALS

Best practices in working with SQL Server Analysis Services aggregations

Posted by Alin D on May 12, 2011

Retrieving data from a cache is the fastest way for SQL Server Analysis Services (SSAS) to resolve a query.

However, in order for the cache to have the necessary data, you need to either run all anticipated queries before your users do or use the CREATE CACHE statement. Since it is difficult to predict every possible query (unless you have a very trivial cube with a handful of dimensions), it is more practical to experiment with CREATE CACHE. (Keep in mind, if you have pre-defined reports that run against the cube, it may be useful to warm up the cache on a set schedule before executing the reporting queries.)
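
As a minimal sketch of what a cache-warming statement can look like, here is an MDX CREATE CACHE statement wrapped in the XMLA Execute envelope that an XMLA client sends to the server (the same statement can also be run directly from an MDX query window). It assumes the Adventure Works sample database, and the measure and attribute sets are purely illustrative; replace them with the sets your reports actually query:

<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Command>
    <Statement>
      CREATE CACHE FOR [Adventure Works] AS
      ( { [Measures].[Internet Sales Amount] },
        { [Date].[Calendar].[Calendar Year].Members },
        { [Product].[Product Categories].[Category].Members } )
    </Statement>
  </Command>
  <Properties>
    <PropertyList>
      <Catalog>Adventure Works DW 2008</Catalog>
    </PropertyList>
  </Properties>
</Execute>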

Unfortunately, if you allow any flexibility in your analytical application, your queries may not be resolved by the cache. In this case, SSAS has two options: it can retrieve data from aggregation files or it can retrieve data from data files. Aggregations are pre-calculated summary values of a measure group's data for a given combination of dimension attributes. For example, an aggregation could contain reseller sales amounts for August 2009, volume discount promotions and a clothing product category.

Since aggregation files are often smaller than data files, it is faster to retrieve data from them.

Aggregation designs

A measure group can have zero or more aggregation designs, and each aggregation design can have one or more aggregations. You can apply an aggregation design to zero or more partitions within your measure group. Furthermore, each aggregation specifies the combination of dimension attributes by which summary values will be pre-calculated.

An aggregation design doesn't speed up queries by itself; it is simply the metadata Analysis Services uses when it builds aggregation files. You can review existing aggregation designs using the Business Intelligence Development Studio (BIDS), SQL Server Management Studio (SSMS) or the Aggregation Manager sample tool.

In SSAS 2008, BIDS includes a new Aggregations tab, which is pictured below.

The advanced view of the Aggregations tab allows you to examine attributes included in each aggregation and to add or remove attributes.

In the following advanced view, the first 30 aggregations for the Internet Sales aggregation design are shown.

Aggregations are assigned counters starting with 0. SSMS 2008 allows you to script aggregation designs in XMLA format.

Here is an abbreviated XMLA for the Internet Sales aggregation design:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <AggregationDesign
      xmlns:xsd="http://www.w3.org/2001/XMLSchema"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
      xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
      xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">
      <ID>Internet Sales 1</ID>
      <Name>Internet Sales</Name>
      <EstimatedRows>32265</EstimatedRows>
      <Dimensions>
        <Dimension>
          <CubeDimensionID>Dim Promotion</CubeDimensionID>
          <Attributes>
            <Attribute>
              <AttributeID>Promotion Name</AttributeID>
              <EstimatedCount>16</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Discount Pct</AttributeID>
              <EstimatedCount>10</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Max Qty</AttributeID>
              <EstimatedCount>4</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Promotion Type</AttributeID>
              <EstimatedCount>6</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Min Qty</AttributeID>
              <EstimatedCount>6</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Promotion Category</AttributeID>
              <EstimatedCount>3</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>End Date</AttributeID>
              <EstimatedCount>10</EstimatedCount>
            </Attribute>
            <Attribute>
              <AttributeID>Start Date</AttributeID>
              <EstimatedCount>8</EstimatedCount>
            </Attribute>
          </Attributes>
        </Dimension>
        <!-- ...remaining dimensions omitted... -->
      </Dimensions>
      <Aggregations>
        <Aggregation>
          <ID>Aggregation 0</ID>
          <Name>Aggregation 0</Name>
          <Dimensions>
            <Dimension>
              <CubeDimensionID>Dim Promotion</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Dim Sales Territory</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Internet Sales Order Details</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Dim Product</CubeDimensionID>
              <Attributes>
                <Attribute>
                  <AttributeID>Model Name</AttributeID>
                </Attribute>
              </Attributes>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Dim Customer</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Dim Currency</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Destination Currency</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Order Date Key – Dim Time</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Ship Date Key – Dim Time</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Due Date Key – Dim Time</CubeDimensionID>
            </Dimension>
            <Dimension>
              <CubeDimensionID>Sales Reason</CubeDimensionID>
            </Dimension>
          </Dimensions>
        </Aggregation>
        <!-- ...remaining aggregations omitted... -->
      </Aggregations>
    </AggregationDesign>
  </ObjectDefinition>
</Create>

Notice that the aggregation design includes the dimensions that aggregations can be defined by, along with the list of attributes that each aggregation pre-calculates summary values for.

In Aggregation Manager, pictured below, you can review aggregations and summarized attributes.

Assigning aggregation designs

You can have multiple aggregation designs for each measure group.

This is useful if you have a set of reports or expected queries that are slated to be executed at different times. In addition, partitions that are seldom used (historical partitions) could each be assigned a different aggregation design to save disk space. For small partitions that only take a few seconds to scan, you do not need to assign any aggregations since there are no performance benefits.

In BIDS, you can assign an aggregation design to a partition by creating a new design for that specific partition.

Alternatively, you can assign an existing aggregation design to a partition through SSMS by right-clicking a partition, choosing Assign Aggregation Design, selecting the aggregation design from a drop-down list and checking the partition to which you wish to apply this design.

In the background, Analysis Services executes an ALTER statement on a partition, similar to the command shown below.

<Alter ObjectExpansion="ObjectProperties" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2001</PartitionID>
  </Object>
  <ObjectDefinition>
    <Partition
      xmlns:xsd="http://www.w3.org/2001/XMLSchema"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
      xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
      xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">
      <ID>Internet_Sales_2001</ID>
      <Name>Internet_Sales_2001</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>
          SELECT * FROM [dbo].[FactInternetSales] WHERE OrderDateKey &lt;= '20011231'
        </QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
      <ProcessingMode>Regular</ProcessingMode>
      <Slice>([Date].[Calendar].[Calendar Year].&amp;[2001], [Product].[Product Categories].[Category].&amp;[1])</Slice>
      <ProactiveCaching>
        <SilenceInterval>-PT1S</SilenceInterval>
        <Latency>-PT1S</Latency>
        <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
        <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
        <AggregationStorage>MolapOnly</AggregationStorage>
        <Source xsi:type="ProactiveCachingInheritedBinding">
          <NotificationTechnique>Server</NotificationTechnique>
        </Source>
      </ProactiveCaching>
      <EstimatedRows>1013</EstimatedRows>
      <AggregationDesignID>AggregationDesign1</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Alter>

Aggregation processing and files

Aggregation files are created when you process partitions using ProcessFull or ProcessIndexes.

Each partition has a Process Mode property that can be set to either Regular or Lazy Aggregations. With Regular processing, aggregations are calculated as part of the partition processing. If you use the Lazy Aggregations option then aggregations are processed using a background thread after partition processing is complete.

The advantage of the Lazy Aggregations option is that your users can start querying the partition as soon as data is loaded. However, query performance won’t benefit from the aggregations until they are calculated. You can find aggregation files under each partition folder. The files will be called N.agg.rigid.data and N.agg.flexible.data, where N represents the file’s version number.

Aggregations can be rigid or flexible depending on the type of attribute relationships you set up in your hierarchies. If you aggregate an attribute that has a flexible relationship with the hierarchy's granularity attribute, the resulting aggregation is flexible. If the relationship to the granularity attribute is rigid, the resulting aggregation is rigid.

From a query performance perspective, rigid and flexible aggregations behave identically. However, SSAS treats the two types of aggregations differently during incremental updates of dimensions. Flexible aggregations are dropped during dimension processing and must be rebuilt, while rigid aggregations are updated as part of processing.

The ProcessIndexes option builds indexes and aggregations for a partition whose data has already been loaded (for example, with the ProcessData option). If you want to delete existing aggregation files, you can use the ProcessClearIndexes option.
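
As a sketch of what that looks like in XMLA (reusing the partition identifiers from the Alter example above), the command below rebuilds indexes and aggregation files for a single partition whose data is already loaded; wrap several of these in a Batch element if you want to process multiple partitions in one transaction:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2001</PartitionID>
  </Object>
  <!-- ProcessIndexes builds indexes and aggregation files for data that is already loaded -->
  <Type>ProcessIndexes</Type>
</Process>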

Building aggregations

You can build aggregations with BIDS and SSMS wizards.

With the Aggregation Design Wizard, you can pick single or multiple partitions, customize aggregation usage settings for each attribute, and customize the level of performance gain you expect from the new aggregation design.

The Usage Based Optimization Wizard is similar, but it also examines the MDX query log stored in a SQL Server database to help it decide which combination of attributes should be aggregated.

The cube structure tab in BIDS allows you to set the aggregation usage property for each attribute. To do this, simply select a cube dimension, choose an attribute and then set its aggregation usage property.

This property can take one of the following values:

  • None – aggregation wizards will NOT consider this attribute for any aggregation
  • Full – aggregation wizards will consider this attribute for all aggregations
  • Unrestricted – no restrictions are provided for aggregation wizards
  • Default – aggregation wizards will use default rules: if this is a key attribute it will be considered for all aggregations; non-key attributes will be treated as unrestricted for aggregation consideration
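
For reference, this setting is stored in the cube's ASSL definition as an AggregationUsage element on the cube dimension attribute. The fragment below is only a sketch, borrowing the Promotion dimension from the sample database; the attribute chosen and the Full value are illustrative:

<CubeDimension>
  <ID>Dim Promotion</ID>
  <Name>Promotion</Name>
  <DimensionID>Dim Promotion</DimensionID>
  <Attributes>
    <Attribute>
      <AttributeID>Promotion Category</AttributeID>
      <!-- Tell the aggregation wizards to consider this attribute for every aggregation -->
      <AggregationUsage>Full</AggregationUsage>
    </Attribute>
  </Attributes>
</CubeDimension>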

The Aggregation Design and Usage Based Optimization wizards run a complex set of algorithms to determine the expected size of each aggregation and its perceived performance benefit.

Microsoft recommends that aggregation files be no greater than one-third of the partition data file size. The theory is that if the aggregation files exceed one-third of the data file size, their footprint on disk will outweigh any performance benefit.

In some situations, you may need more granular control over the aggregations. This is when it is useful to know how to use the BIDS Aggregations tab or Aggregation Manager. As you saw earlier in this article, BIDS 2008 allows you to add attributes to existing aggregations by checking the attribute. You can also create new aggregations, copy an aggregation or delete an aggregation from an existing aggregation design.

Aggregation Manager allows you to add aggregations using the query log. Simply right-click on the Aggregation Designs folder for the measure group you wish to create aggregations for and choose Add from query log.

A screen, similar to the one below, appears.

Here, you can specify the SQL Server connection parameters and the SQL statement used to retrieve the attribute bitmap from the query log table. You can also customize the aggregation design name as well as the aggregation prefix. Easily identifiable prefixes make it simpler to spot the custom-designed aggregations when you review a query's output in Profiler.

With Aggregation Manager, you can create aggregations specifically for troublesome queries.

For example, say you have 10 queries that take a particularly long time because they scan large partitions. Truncate the query log table on your test server, then run the poorly performing MDX queries and make sure they're logged. Note that each query could examine multiple sub-cubes and therefore could create multiple records in the query log table.

Next, create aggregations from the query log using Aggregation Manager. Clear the cache, re-run the queries and monitor execution using Profiler. This time they should use the aggregations and run much faster.
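
Clearing the Analysis Services cache between test runs is usually done with an XMLA ClearCache command such as the sketch below; the database and cube IDs are taken from the earlier examples, and you can scope the Object element to the database, cube or measure group you want to clear:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Adventure Works DW 2008</DatabaseID>
      <CubeID>Adventure Works</CubeID>
    </Object>
  </ClearCache>
</Batch>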

Aggregations can add a significant footprint to your database size. It is important to keep in mind, particularly if you synchronize your databases, that the more aggregations you have, the longer synchronization will take.

Posted in SQL