Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘Load balancing’

New features in Windows 2012

Posted by Alin D on August 31, 2012

With Windows Server 2012 (formerly “Windows Server 8”) on the horizon and many IT shops mulling upgrades, moves to Server 2012 are more likely to be incremental than all-at-once. Those with infrastructure built on top of Windows Server will probably run Server 2012 and older versions side-by-side for some time.

Given that, here are answers to a few common questions about coexistence between the new and older versions of Windows Server.

Can I run Windows Server 2012 systems in a cluster with earlier versions of Windows Server?

The short answer is “no.” There are several reasons for this, not least of which are the major improvements in the way clustering is managed and deployed across servers in Windows Server 2012. The new clustering features aren’t backward-compatible with earlier versions of Windows Server, so clusters can’t be upgraded in a “rolling” fashion; each node in a cluster has to be evicted from the cluster, upgraded to Windows Server 2012 and added to a whole new cluster of 2012-only servers.

Here are some of the key new clustering features in Windows Server 2012, which will not be supported by earlier versions of the operating system:

Storage migration. This allows the storage of cluster-managed VMs to be moved to a new location while the VM is up and running, in much the same manner as VMware’s Storage vMotion.

Cluster Shared Volumes. This feature is not new to Server 2012 — it was introduced in Windows Server 2008 R2 — but it has been revised and expanded, and the expanded functionality is not available to previous versions of Server. Multiple nodes in the same cluster can share the same file system, which allows a VM hosted on any node in that cluster to be migrated to any other node in that cluster.

Cluster-aware updating (CAU). Updates to machines in a Windows Server 2012 cluster can be applied automatically in a rolling fashion, so the cluster as a whole remains online during the process. CAU’s behavior can be extended through plug-ins that talk to its API.

There are many other new features, but to use them uniformly across a cluster requires a cluster-wide upgrade to Windows Server 2012.
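Once every node is running Windows Server 2012, a CAU updating run can be started from PowerShell. The following is a minimal sketch; the cluster name is a placeholder and the defaults of the built-in Windows Update plug-in are assumed:

Import-Module ClusterAwareUpdating
# Patch each node in turn, keeping the cluster online; stop if more than one node fails
Invoke-CauRun -ClusterName "CONTOSO-CLU1" -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 1 -RequireAllNodesOnline -Force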

What do I need to know about using file shares between Windows Server 2012 and earlier versions of Windows Server?

Windows Server 2012 uses the new SMB 3.0 protocol (originally SMB 2.2) for establishing file shares between Windows systems.

SMB 3.0 clients will always attempt to negotiate the highest possible level of the protocol with any peer they connect with, so if you establish a share between Windows Server 2012 and an earlier version of Windows Server, the connection will be negotiated at whatever level of SMB is available on the other server. Microsoft TechNet blogger Jose Barreto has a post with a chart that spells out the highest grade of SMB available to a connection negotiated between any two editions of Windows.
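On a Windows Server 2012 or Windows 8 machine you can check which dialect was actually negotiated for each active connection. A small sketch, assuming the built-in SMB cmdlets; the server and share names it returns will be your own:

# Show the SMB dialect negotiated with each server the machine is currently connected to
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect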

SMB 3.0’s new features are only available when both peers are Windows Server 2012 or Windows 8 systems. Some of the new features include:

Scale-out. The same folder can be shared from multiple nodes in a cluster for the sake of failover, better use of bandwidth, dynamic capacity scaling, load balancing and fault tolerance.

Multichannel support. Any multiple, redundant network links between SMB peers can be used to accelerate the connection.

End-to-end encryption. Data sent between SMB 3.0 peers can be encrypted in transit, with encryption enabled per share or for the entire server.

VSS support. SMB shares are now covered by volume shadow copies as well, so data on file shares can also be backed up and restored through any VSS-aware software.

SMB Direct. Servers that use RDMA-capable network adapters can enjoy high-speed memory-to-memory data transfers with far less CPU usage and latency than conventional copy operations.

SMB directory leasing. This feature reduces latency for documents accessed via the BranchCache feature by locally caching more of the metadata associated with the document, reducing the number of round trips to the original server.

Note that if all the clients and servers in your infrastructure use SMB 2 or better — Windows Vista on the client side, Windows Server 2008 on the server side — you can disable SMB 1.x on Windows Server 2012 with the PowerShell command Set-SmbServerConfiguration -EnableSMB1Protocol $false. Disabling SMB 1.x reduces the potential attack surface of the server. If the protocol isn’t in use, it’s best to disable it to prevent a possible future exploit from being used against it.
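A short sketch of checking the current setting before turning the protocol off; the -Force switch simply suppresses the confirmation prompt:

# See whether SMB 1.x is currently enabled on this server
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMB 1.x without prompting
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force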

What Windows Server features are being deprecated in Windows Server 2012?

Some features in Windows Server are no longer supported as of Windows Server 2012, or are in the process of being removed. Most of these deprecations only involve code or applications that run directly on the new OS, rather than interoperation with other versions. That said, there are exceptions, especially if you have an older application that expects the same behavior when it tries to interoperate with the newer version of Server.

Here’s a list of some of the major deprecations and feature removals in Windows Server 2012 (with more listed at TechNet), which may impact cross-server compatibility or applications running on other servers:

Clustering. 32-bit cluster resource DLLs are being deprecated and should be replaced with their 64-bit counterparts whenever possible. Also, if you have any programs that use the Cluster Automation Server (MSClus) COM API, be aware that this API is now only available via an optional component named FailoverCluster-AutomationServer, which isn’t installed by default.

Databases. 16- and 32-bit ODBC support has been removed, as have ODBC and OLE DB drivers for Oracle and Jet Red databases. (Use vendor-supplied database connectors.) ODBC/OLE DB support is also being dropped for versions of SQL Server later than SQL Server 2000; for those newer releases, use SQL Server Native Client instead. Finally, no version of SQL Server earlier than 7.0 is supported at all. It’s unlikely that anyone is still running SQL Server 6.5 or earlier, but any attempt to connect to a SQL Server 6.5 (or earlier) instance from Windows Server 2012 will generate an error.

Active Directory. Support for resource groups and using Active Directory Lightweight Directory Services as an authentication store have been deprecated.

UNIX. Many UNIX subsystem features are being deprecated or removed. Microsoft’s entire Subsystem for UNIX-based Applications (SUA) POSIX subsystem is being deprecated, along with the line printer daemon protocol that is often used by UNIX clients. As a general replacement for Microsoft’s UNIX features, consider Cygwin or MinGW, open source toolsets and APIs that are maintained entirely apart from Windows’ own evolution.

WMI. Many individual WMI providers are being removed or deprecated: the SNMP provider (because SNMP itself is deprecated), the WMI provider for Active Directory (eclipsed by PowerShell) and the Win32_ServerFeature API.

Finally, the Windows Help application (winhlp32.exe) has also been removed, although it has not shipped with Windows Server since Windows Server 2008. What’s more, no add-on version of the Windows Help program is being supplied by Microsoft as a download, as was done for previous versions of Windows that omitted Windows Help. (However, a Windows Help edition for the client edition of Windows 8 will be made available later, which should do the job.)


Install WSUS server on Hyper-V virtual machine

Posted by Alin D on June 27, 2012

As organizations continue to move away from the use of physical servers, a frequent question arises: Is it a good idea to virtualize WSUS servers? Short answer: yes. Read on to find out how to run WSUS in a Hyper-V machine.

Will WSUS run in a virtual machine?

In a word, yes. If you plan on hosting a WSUS virtual machine on Hyper-V, it is generally recommended that you run WSUS on top of the Windows Server 2008 R2 operating system. In order to do that, you will have to deploy WSUS 3 SP2. Until SP2, WSUS did not work properly with Windows Server 2008 R2, and it did not support the management of Windows 7 clients.

What is the easiest way to virtualize a WSUS server?

If you are currently running WSUS 3 on a physical server, then I would recommend doing a migration upgrade. To do so, set up a virtualized WSUS server, configure it to be a replica of your physical WSUS server and then perform a synchronization. Once the sync process completes, reconfigure the virtual WSUS server to be autonomous. Then, you can decommission your physical WSUS server.

This technique offers two main advantages. First, it makes it easy to upgrade the WSUS server’s operating system if necessary. The other advantage is that this method involves far less downtime than a standard P2V conversion, because your physical WSUS server continues to service clients while your virtual WSUS server is being put into place.
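The replica and synchronization settings can also be driven from PowerShell through the WSUS administration API rather than the console. The following is a rough sketch only, assuming the Microsoft.UpdateServices.Administration assembly is installed on the machine where it runs; the server names and port are placeholders:

# Load the WSUS administration API and connect to the new virtual WSUS server
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("VWSUS01", $false, 8530)

# Point it at the existing physical WSUS server and put it into replica mode
$config = $wsus.GetConfiguration()
$config.SyncFromMicrosoftUpdate = $false
$config.UpstreamWsusServerName = "PWSUS01"
$config.UpstreamWsusServerPortNumber = 8530
$config.IsReplicaServer = $true
$config.Save()

# Kick off the synchronization from the upstream server
$wsus.GetSubscription().StartSynchronization()

Once the sync has finished, the same properties can be flipped back (IsReplicaServer to $false, SyncFromMicrosoftUpdate to $true) to make the virtual server autonomous.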

What kind of capacity can I get from a virtualized WSUS server?

A single WSUS server should be able to handle up to 25,000 clients. However, this assumes that sufficient resources have been provisioned and that SQL Server is running on a separate server (physical or virtual). Some organizations have been able to achieve higher capacities by using multiple front-end servers.

What are the options for making WSUS fault-tolerant?

In a physical server environment, WSUS is made fault-tolerant by eliminating any single points of failure. Normally you would create a Network Load Balancing (NLB) cluster to provide high availability for your WSUS servers. Of course, WSUS depends on SQL Server, and the preferred method for making SQL Server fault-tolerant is to build a failover SQL Server cluster.

While it is possible to recreate this high-availability architecture in a Hyper-V infrastructure, it is usually considered a better practice to build a Hyper-V cluster instead. If your host servers are clustered, then clustering your WSUS servers and your SQL Servers becomes unnecessary (at least from a fault tolerance standpoint).

If Hyper-V hosts are not clustered (and building a Hyper-V cluster is not an option for whatever reason) then I would recommend going ahead and creating a clustered architecture for the virtualized WSUS and SQL servers. However, you should make sure not to place multiple WSUS or SQL servers onto a common Hyper-V server because doing so will undermine the benefits of clustering WSUS and SQL Server.

What do I need in terms of network bandwidth?

There are no predetermined rules for providing network bandwidth to a virtualized WSUS server. Keep in mind, however, that there are a number of different issues that can occur as a result of insufficient bandwidth. If at all possible, I would recommend dedicating a physical network adapter to your virtual WSUS server. If you are forced to share a network adapter across multiple virtual servers then use network monitoring tools to verify that the physical network connection isn’t saturated.

If saturation becomes an issue, remember that WSUS can be throttled either at the server itself or at the client level through the use of group policy settings. You can find client throttling policies in the Group Policy Object Editor at Computer Configuration > Administrative Templates > Network > Background Intelligent Transfer Service.

Are there any special considerations for the SQL database?

It is generally recommended to run SQL Server on a separate machine (physical or virtual) so that you can allocate resources directly to the database server. I also recommend running the Cleanup Wizard and defragmenting the database every couple of months. Doing so will help the database to run optimally, which is important in a virtualized environment.
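The cleanup step can be scripted against the same WSUS administration API so it is easy to schedule. Again, this is a sketch under the assumption that the Microsoft.UpdateServices.Administration assembly is available; the server name is a placeholder:

# Connect to the WSUS server and run the equivalent of the Server Cleanup Wizard
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("VWSUS01", $false, 8530)

$scope = New-Object Microsoft.UpdateServices.Administration.CleanupScope
$scope.DeclineSupersededUpdates    = $true
$scope.DeclineExpiredUpdates       = $true
$scope.CleanupObsoleteUpdates      = $true
$scope.CompressUpdates             = $true
$scope.CleanupObsoleteComputers    = $true
$scope.CleanupUnneededContentFiles = $true

$wsus.GetCleanupManager().PerformCleanup($scope)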

Another thing to keep in mind is that SQL Servers tend to be I/O intensive. Therefore, if you are planning to virtualize your SQL server then you might consider using dedicated physical storage so that the I/O load generated by SQL does not impact other virtual machines.


How to use Windows Network Load Balancing to load balance Exchange 2010

Posted by Alin D on November 13, 2011

When administrators consider load balancing their Exchange 2010 installations, they often turn to dedicated — and frequently expensive — hardware products. Fortunately, if you’re Linux-savvy, a free load-balancing option is available. If not, that’s all right; help is on the way.

You can use Windows Network Load Balancing to load balance Exchange, but several limitations make it impractical for certain Exchange deployments. For example, you can’t add more than eight nodes to a Network Load Balancing cluster. You also can’t combine Windows Failover Clustering and Network Load Balancing, because the two can’t be installed on the same server.

In cases like these, you need external assistance. Help usually comes in the form of hardware-based load balancers. Unfortunately, those products aren’t cheap. Prices typically start around $1,500 for low-end models and quickly soar into the tens of thousands of dollars.

Most companies don’t have to spend that kind of money though. You can use a free virtual-software appliance that acts as a load balancer. This appliance can be installed on a repurposed server or even in a virtual machine (VM) on shared hardware. All you’re really “spending” is the time and effort to get it up and running.

Your free load-balancing options for Exchange 2010
One such appliance is HAProxy, a Linux-based Layer 4 load balancer for TCP and HTTP applications. There are already a number of third-party products like redWall’s Firewall and Exceliance’s HAPEE distribution that use the tool, as well as many satisfied users — the Fedora Project, Reddit, StackOverflow and many more.

You must be comfortable with Linux to use HAProxy in your Exchange 2010 production environment. If not, Microsoft-certified systems administrator Steve Goodman created the Exchange 2010 HAProxy Virtual Load Balancer.

The appliance is a pre-packaged version of HAProxy, built on Ubuntu Linux, that can be deployed on VMware vSphere or Microsoft Hyper-V with minimal work required by an Exchange administrator.

All you need is a solid understanding of your network topology and some familiarity with either VMware or Hyper-V. While you don’t need to fully understand Linux to install Goodman’s appliance, it does help to know about the OS if you want to fine-tune aspects of the tool that aren’t available through the Web interface. That said, you can get the HAProxy Virtual Load Balancer up and running in your Exchange 2010 lab environment without being a Linux expert.

The appliance comes in two formats: a VMware vSphere .ovf file and a Hyper-V-compatible .vhd file. The tool’s website contains step-by-step instructions on how to set up HAProxy on either vSphere or Hyper-V.

Setting up the Exchange 2010 HAProxy Virtual Load Balancer
Boot the appliance and you’re greeted with a simple console login screen. To begin, type in root as your username and setup as your password. You will be prompted to choose a new password. This secures the setup process; you can change the password later on.

Next comes the most important part of the setup. You must set the IP address, netmask and default gateway for HAProxy. If you mistype anything, press Ctrl+C to get out of the script, type logout, then log back in with your new password and repeat the process. After you complete the first step, you will be given a URL; make sure to write it down. You will be prompted to log back in when HAProxy reboots.

The rest of the setup process — as well as most HAProxy management — is done through HAProxy’s Web interface. Configure the static RPC ports for your client access servers, then list the IP addresses of each of the client access servers you want to balance. You must also set the time zone and the network time protocol (NTP) servers. Don’t touch the console login screen unless there’s an overwhelming reason to do so.

While the HAProxy Virtual Load Balancer has been through plenty of development, the virtual appliance is still a work in progress. For example, HAProxy is a Layer 4 (TCP) balancer, not a Layer 7 (application-level) balancer. It is not completely “Exchange-aware,” so it can’t do things like application-level monitoring or SSL offloading — at least, not yet.
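Under the hood, the appliance generates ordinary HAProxy TCP (Layer 4) configuration for the client access servers you list. A hand-written equivalent would look roughly like the sketch below; the virtual IP, ports and CAS addresses are placeholders, and this is only illustrative, not the appliance's actual generated file:

# Balance HTTPS traffic for Outlook Web App across two client access servers
listen exchange_owa
    bind 10.0.0.50:443
    mode tcp
    balance roundrobin
    option tcplog
    server cas1 10.0.0.11:443 check
    server cas2 10.0.0.12:443 check

Because this is plain TCP balancing, HAProxy only watches whether the port answers; it has no insight into what Exchange is doing at the application layer, which is exactly the limitation described above.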

These items may eventually be added, and it sounds like Goodman plans to further improve the tool. “Subsequent versions will be production ready, as this is totally aimed at being an easy-to-use free alternative to paid-for hardware and virtual load balancers for Exchange 2010,” Goodman said.

 


Server load balancing in Exchange 2010 hub transport

Posted by Alin D on September 26, 2011

Microsoft encourages failover clustering to provide fault tolerance and redundancy for Exchange Server. However, neither the client access server nor the hub transport server roles support failover clustering in Exchange 2010 — only mailbox servers may be clustered.

To provide fault tolerance and redundancy for your hub transport servers, you must deploy one or more additional hub transport servers. When you do, Exchange 2010 automatically distributes the workload across your hub transport servers. This way, you don’t have to worry about building a cluster or configuring Exchange to use the new server.

While providing redundancy for the hub transport server role in Exchange 2010 is fairly simple, there are a couple of important factors to remember when it comes to effective load balancing.

Each mailbox server requires a hub transport server


The hub transport server resides at the Active Directory site level. This means that every Active Directory (AD) site that contains an Exchange mailbox server also needs at least one hub transport server. You can’t set up a single hub transport server and expect it to service a multi-site Exchange Server deployment.

Because the hub transport server functions at the AD site level, redundancy and fault tolerance must also occur there. Deploying redundant hub transport servers helps provide load balancing and fault tolerance for a single site, not the entire Exchange Server organization.

The importance of Exchange Server 2010 Service Pack 1


The other important thing that you need to know about protecting the hub transport server role is that if you’re using Exchange Server 2010 and have hub transport servers in multiple sites, Exchange 2010 Service Pack 1 (SP1) is essential. If you haven’t installed Exchange 2010 SP1, a hub transport server failure will result in uneven workload distributions.

When Exchange 2010 routes messages to another AD site, it uses a technique similar to domain name system (DNS) round robin to distribute the load among the hub transport servers in the remote site. For example, if a remote site contains five hub transport servers, each server receives approximately 20% of the messages sent from the local site.

With that in mind, imagine that one of the five hub transport servers fails. The hub transport servers in the local site are completely unaware of the failure in the remote site. Therefore, the servers continue distributing the workload to all the remote hub transport servers. The connections to the offline hub transport server fail, but once the failure is detected, Exchange routes the messages to the next hub transport server.

The problem here is that the next hub transport server in line must take on the failed server’s workload in addition to its own. This behavior is normal for sites with only two hub transport servers, but for sites with three or more hub transport servers, it results in uneven load balancing. Consider our example: if a failure occurred in a site with five hub transport servers, one of the remaining servers would need to handle 40% of the total workload, while the other three would still receive only 20% each.

Exchange 2010 SP1 is essential because it includes a feature called the Healthy Server Selector, which tracks which Exchange servers are available. If a hub transport server in a remote site fails, the Healthy Server Selector discovers the failure and prevents the local hub transport servers from sending messages to the failed server until it comes back online. This helps Exchange ensure that workloads are evenly distributed among the remaining hub transport servers.
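A quick way to confirm that every hub transport server in the organization is at the SP1 level is to list them from the Exchange Management Shell. A small sketch, assuming the standard Exchange 2010 cmdlets:

# List hub transport servers, their AD sites and their build versions
Get-ExchangeServer | Where-Object { $_.ServerRole -match "HubTransport" } |
    Select-Object Name, Site, AdminDisplayVersion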


How to Configure DNS Server Settings in Windows 2008

Posted by Alin D on June 28, 2011

When you install the DNS server role on a Windows Server 2008 or Windows Server 2008 R2 computer, the DNS Manager Microsoft Management Console (MMC) snap-in is automatically installed, providing you with all the tools required to manage and administer DNS. When you install AD DS, the DNS zones needed for administering DNS in the AD DS domain are added to your DNS installation. This section introduces you to server-specific settings that you can configure from the DNS server’s Properties dialog box.

From the DNS Manager snap-in, right-click the DNS server and choose Properties to display the dialog box shown in the image below. This dialog box enables you to configure a comprehensive range of server-specific properties. The more important properties are discussed in this section.

 

Forwarding

The act of forwarding refers to the relaying of a DNS request from one server to another one when the first server is unable to process the request. This is especially useful in resolving Internet names to their associated IP addresses. By using a forwarder, the internal DNS server passes off the act of locating an external resource, thereby reducing its processing load and network bandwidth. The use of forwarding is also helpful for protecting internal DNS servers from access by unauthorized Internet users. It works in the following manner:

Step 1. A client issues a request for a fully qualified domain name (FQDN) in a zone for which its preferred DNS server is not authoritative (for example, an Internet domain such as http://www.google.com).

Step 2. The local DNS server receives this request but has zone information only for the internal local domain and checks its list of forwarders.

Step 3. Finding the IP address of an external DNS server (such as one hosted by the company’s ISP), it forwards the request to the external server (forwarder).

Step 4. The forwarder attempts to resolve the required FQDN. Should it not be able to resolve this FQDN, it forwards the request to another forwarder.

Step 5. When the forwarder is able to resolve the FQDN, it returns the result to the internal DNS server by way of any intermediate forwarders, which then returns the result to the requesting client.

You can specify forwarders from the Forwarders tab of the DNS server’s Properties dialog box, as shown in Figure 4-2. Click Edit to open the Edit Forwarders dialog box shown in Figure 4-3. In the space provided, specify the IP address of a forwarder and click OK or press Enter. The server will resolve this IP address to its FQDN and display these in the Forwarders tab. You can also modify the sequence in which the forwarding servers are contacted by using the Up and Down command buttons, or you can remove a forwarding server by selecting it and clicking Delete.

You can also specify forwarders from the command line by using the dnscmd command. Open an administrative command prompt and use the following command syntax:

dnscmd ServerName /ResetForwarders MasterIPaddress … [/TimeOut Time] [/Slave]

The parameters of this command are as follows:

ServerName: Specifies the DNS hostname of the DNS server. You must include this parameter; use a period to specify the local computer.

/ResetForwarders: Indicates that you are configuring a forwarder.

MasterIPaddress …: Specifies a space-separated list of one or more IP addresses of DNS servers to which queries are forwarded.

/TimeOut: Specifies a timeout setting in seconds.

/Slave: Determines whether the DNS server uses recursion when querying for the domain name specified by ZoneName.
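For example, to configure two forwarders on the local DNS server with a five-second timeout (the addresses here are placeholders), the command would look like this:

dnscmd . /ResetForwarders 192.0.2.10 192.0.2.11 /TimeOut 5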



Conditional Forwarders

You can configure a DNS server as a conditional forwarder. This is a DNS server that handles name resolution for specified domains only. In other words, the local DNS server will forward all the queries that it receives for names ending with a specific domain name to the conditional forwarder. This is especially useful in situations where users in your company need access to resources in another company with a separate AD DS forest and DNS zones, such as a partner company. In such a case, specify a conditional forwarder that directs such queries to the DNS server in the partner company while other queries are forwarded to the Internet. Doing so reduces the need for adding secondary zones for partner companies on your DNS servers.

The DNS snap-in provides a Conditional Forwarders node where you can specify forwarding information. Use the following procedure to specify conditional forwarders:

Step 1. Right-click the Conditional Forwarders node and choose New Conditional Forwarder

Step 2. Type the DNS domain that the conditional forwarder will resolve and the IP address of the server that will handle queries for the specified domain.

Step 3. If you want to store the conditional forwarder information in AD DS, select the check box provided and choose an option in the drop-down list that specifies the DNS servers in your domain or forest that will receive the conditional forwarder information. Then click OK.

Information for the conditional forwarder you have configured is added beneath the Conditional Forwarders node in the DNS Manager snap-in. Name queries for the specified DNS domain will now be forwarded directly to this server.
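Conditional forwarders can also be created from the command line with dnscmd. A sketch with placeholder names and addresses; the partner domain and the IP address of the partner's DNS server would be your own:

dnscmd . /ZoneAdd partner.example.com /Forwarder 192.0.2.20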

Root Hints

Whenever a DNS server is unable to resolve a name directly from its own database or with the aid of a forwarder, it sends the query to a server that is authoritative for the DNS root zone. Recall from Chapter 2 that the root is the topmost level in the DNS hierarchy. The server must have the names and addresses of these servers stored in its database to perform such a query. These names and addresses are known as root hints, and they are stored in the cache.dns file, which is found at %systemroot%\system32\dns. This is a text file that contains NS and A records for every available root server.

When you first install DNS on a server connected to the Internet, it should download the latest set of root hints automatically. You can verify that this has occurred by checking the Root Hints tab of the server’s Properties dialog box. You should see a series of FQDNs with their corresponding IP addresses.

If your internal DNS server does not provide access to Internet name resolution, you can improve network security by configuring the root hints of the internal DNS servers to point to the DNS servers that host your root domain and not to Internet root domain DNS servers. To modify the configuration on this tab, perform one or more of the following actions:

Click Add to display the New Name Server Record dialog box, from which you can manually type the FQDNs and IP addresses of one or more authoritative name servers.

Select an entry and click Edit to display the Edit Name Server Record dialog box, which enables you to modify it or add an additional IP address to an existing record.

Select an entry and click Remove to remove a record.

Click Copy from Server to copy a list of root hints from another DNS server. Type the DNS name or IP address in the dialog box that appears. This action is useful if your server was not connected to the Internet at the time DNS was installed.

Although this is not a recommended action, you can also edit the cache.dns file using a text editor such as Notepad.

NOTE: You can also use the Configure a DNS Server Wizard to configure root hints for your server. Right-click your server in the console tree of the DNS Manager snap-in and choose Configure a DNS Server. Then select the Configure root hints only (recommended for advanced users only) option from the Select Configuration Action page of the wizard.

Configuring Zone Delegation

As you have seen, you can divide your DNS namespace into a series of zones. You can delegate management of any of these zones to another location or workgroup within your company. Configuring zone delegation involves creating delegation records in other zones that point to the authoritative DNS servers for the zone being delegated. Doing so enables you to transfer authority as well as provide correct referrals to other DNS servers and clients utilizing these servers for name resolution.

Zone delegation provides the following benefits:

You can delegate the administration of a portion of your DNS namespace to another office or department in your company.

You can subdivide your zone into smaller zones for load balancing of DNS traffic among multiple servers. This also enables improved DNS name resolution performance and fault tolerance.

You can extend the namespace by adding additional subdomains for purposes such as adding new branch offices or sites.

You can use the New Delegation Wizard to create a zone delegation. The wizard uses the information you supplied to create name server (NS) and host (A or AAAA) resource records for the delegated subdomain. Perform the following procedure:

Step 1. Right-click the parent zone in the console tree of DNS Manager and choose New Delegation. This starts the New Delegation Wizard.

Step 2. Click Next and then enter the name of the delegated subdomain.

Step 3. As shown in the next screenshot, the wizard appends the parent zone name to form the FQDN of the domain being delegated. Click Next and then click Add.

Step 4. In the New Name Server Record dialog box, type the FQDN and IP address of the DNS server that is authoritative for the subdomain and then click OK. Repeat if necessary to add additional authoritative DNS servers.

Step 5. The servers you’ve added are displayed on the Name Servers page of the wizard. When finished, click Next and then click Finish.

You can also use the dnscmd utility to perform zone delegation from the command line. Open an administrative command prompt and use the following command:

dnscmd ServerName /RecordAdd ZoneName NodeName [/Aging] [/OpenAcl] [Ttl] NS {HostName|FQDN}
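As an illustration of the syntax above, delegating a subdomain named branch in the example.com zone to a DNS server named ns1.branch.example.com (all placeholder names) would look like this:

dnscmd . /RecordAdd example.com branch NS ns1.branch.example.com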

Debug Logging

The DNS server also supports debug logging of packets sent to and from the DNS server to a text file named dns.log. This file is stored in the %systemroot%\system32\dns folder. To configure logging, right-click the server in the DNS Manager snap-in and choose Properties, then click the Debug Logging tab to display the logging options.

By default, no logging is configured. Select the Log packets for debugging check box, which makes all other check boxes available.

To view the DNS log, first stop the DNS service by right-clicking the DNS server in DNS Manager and choosing All Tasks > Stop. Then open the log in either Notepad or WordPad. When you are finished, restart the DNS service by right clicking the DNS server and choosing All Tasks > Start.

Event Logging

The Event Logging tab of the DNS server’s Properties dialog box enables you to control how much information is logged to the DNS log, which appears in Event Viewer. You can choose from one of the following options:

No events: Suppresses all event logging (not recommended).

Errors only: Logs error events only.

Errors and warnings: Logs errors and warnings only.

All events: Logs informational events, errors, and warnings. This is the default.

Choosing either the Errors only or Errors and warnings option might be useful to reduce the amount of information recorded to the DNS event log.

DNS Security Extensions

DNS in itself is vulnerable to certain types of intrusions such as spoofing, man-in-the-middle, and cache-poisoning attacks. Because of this, DNS Security Extensions (DNSSEC) was developed to add additional security to the DNS protocol. Outlined in Requests for Comments (RFCs) 4033, 4034, and 4035, DNSSEC is a suite of DNS extensions that adds security to the DNS protocol by providing origin authority, data integrity, and authenticated denial of existence. Although an older form of DNSSEC was used in Windows Server 2003 and the first iteration of Windows Server 2008, DNSSEC has been updated completely according to the specifications in the just-mentioned RFCs. The newest form of DNSSEC is available for Windows Server 2008 R2 and Windows 7 only.

DNSSEC enables DNS servers to use digital signatures to validate responses from other servers and resolvers. The signing keys are published in a new type of resource record called DNSKEY within the DNS zone, and the signatures themselves are stored in RRSIG records. On resolving a name query, the DNS server includes the appropriate digital signature with the response, and the signature is validated by means of a preconfigured trust anchor. A trust anchor is a preconfigured public key associated with a specific zone. The validating server is configured with one or more trust anchors. Besides DNSKEY and RRSIG, DNSSEC adds NSEC and DS resource records to DNS. You can view zones that are signed with DNSSEC in the DNS Manager tool, and you can view the trust anchors from the Trust Anchors tab of the DNS server’s Properties dialog box.

To specify a trust anchor, click Add. Provide the information requested in the New Trust Anchor dialog box, including its name and public key value, and then click OK. The public key value must be formatted as a Base64 encoded value; for more information on the public key, refer to http://www.rfc-archive.org/getrfc.php?rfc=4034. Doing so adds the trust anchor to the Trust Anchors tab and enables its use for signing DNS query responses.

Advanced Server Options

The Advanced tab of the DNS server’s Properties dialog box contains a series of options that you should be familiar with.

Server Options

The Server options section of this dialog box contains the following six options, the last three of which are selected by default:

Disable recursion: Prevents the DNS server from forwarding queries to other DNS servers, as described later in this section. Select this check box on a DNS server that provides resolution services only to other DNS servers because unauthorized users can use recursion to overload a DNS server’s resources and thereby deny the DNS Server service to legitimate users.

BIND secondaries: During zone transfer, DNS servers normally utilize a fast transfer method that involves compression. If UNIX servers running a version of Berkeley Internet Name Domain (BIND) prior to 4.9.4 are present, zone transfers will not work. These servers use a slower uncompressed data transfer method. To enable zone transfer to these servers, select this check box.

Fail on load if bad zone data: When selected, DNS servers will not load zone data that contains certain types of errors. The DNS service checks name data using the method selected in the Name Checking drop-down list on this tab.

Enable round robin: Enables round robin, as described later in this section.

Enable netmask ordering: Prioritizes local subnets so that when a client queries for a hostname mapped to multiple IP addresses, the DNS server preferentially returns an IP address located on the same subnet as the requesting client.

Secure cache against pollution: Cache pollution takes place when DNS query responses contain malicious items received from nonauthoritative servers. This option prevents attackers from adding such resource records to the DNS cache. The DNS servers ignore resource records for domain names outside the domain to which the query was originally directed. For example, if you sent a query for que.com and a referral provided a name such as windows-scripting.info, the latter name would not be cached when this option is enabled.

Round Robin

Round robin is a load-balancing mechanism used by DNS servers to distribute name resolution activity among all available DNS servers. If multiple A or AAAA resource records are found in a DNS query (for example, on a multihomed computer), round robin sequences these resource records randomly in repeated queries for the same computer. An example in which round robin is useful is a situation where you have multiple terminal servers in a server farm that users access for running applications. DNS uses round robin to randomize the sequence in which users accessing the terminal servers reach given servers.

By default, round robin is enabled on Windows Server 2008 R2 DNS servers. You can verify or modify this setting from the Advanced tab of the DNS server’s Properties dialog box already shown in Figure 4-10. Select or clear the check box labeled Enable round robin as required.
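Several of these advanced options can also be set from the command line rather than the Advanced tab. A brief sketch using dnscmd against the local server, assuming the standard server property names; a value of 1 enables the option and 0 disables it:

dnscmd . /Config /RoundRobin 1
dnscmd . /Config /LocalNetPriority 1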


How VMware and Microsoft server load-balancing services work

Posted by Alin D on June 27, 2011

Virtual server load balancing among cluster hosts is all about the math. An automated server load-balancing service calculates resource utilization, then compares one host’s available capacity with that of other hosts to determine whether a cluster needs rebalancing.

But it’s not an exact science. Various load-balancing services use different calculation models to determine whether a cluster is balanced. VMware vSphere’s Distributed Resource Scheduler (DRS) feature, for example, uses different metrics than does Microsoft System Center Virtual Machine Manager’s Performance and Resource Optimization (PRO) feature. Ultimately, however, admins need a combination of performance monitoring and calculations before they live-migrate a virtual machine (VM) for load balancing.

Most of us leave cluster load balancing to an automated load-balancing service, but it’s important to understand the calculations that service uses. Understanding these metrics indicates when a load-balancing service should be tuned for better results. Plus, you’re better able to recognize when a vendor’s server load-balancing offering isn’t true load balancing.

Distributed Resource Scheduler (DRS) – VMware’s load-balancing service

The VMware DRS load-balancing service uses two metrics to determine whether a cluster is out of balance. When the cluster’s current host load standard deviation is greater than the target host load standard deviation, DRS recognizes that the cluster is unbalanced. To rebalance the cluster, DRS usually uses vMotion to migrate VMs off an overloaded host.

These server load-balancing metrics reside in the VMware DRS pane inside the vSphere Client. DRS gathers its values by analyzing each host’s CPU and memory resources to determine a load level. Then, the load-balancing service determines an average load level and standard deviation from that average. As long as vSphere is operational, DRS re-evaluates its cluster load every five minutes to check for balance.

If the load-balancing service determines that rebalancing is necessary, DRS prioritizes which virtual machines need to be rebalanced across the cluster. It calculates each host’s load and compares it with the other hosts in the cluster to arrive at the cluster-wide deviation.
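In rough terms (a simplified restatement of the description above, not VMware's exact published formula), the relationship is:

host load = (sum of the resource entitlements of the VMs on the host) / (capacity of the host)
current host load standard deviation = standard deviation of the host load values across all hosts in the cluster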

A perfectly balanced cluster reports a zero for its current host load standard deviation. That means each host is balanced with the others in the cluster. If that number increases, it means the VMs on one server require more resources than the average and that the total load on that host is out of balance with the levels on other hosts.

DRS then makes prioritized recommendations to restore balance. Priority-one recommendations should be implemented immediately, while priority-five recommendations won’t do much to fix the imbalance.

Microsoft’s Performance and Resource Optimization – Server load balancing

Microsoft’s System Center Virtual Machine Manager (SCVMM) takes a different approach to cluster load balancing. Natively, it doesn’t take into account aggregate cluster conditions when calculating resource utilization. Its load-balancing service, PRO, considers only overutilization on individual hosts.

You should also note some important conditions with SCVMM. Neither Hyper-V nor SCVMM alone can automatically relocate VMs based on performance conditions. SCVMM can relocate virtual machines only after it has been integrated with System Center Operations Manager (SCOM) and once PRO is enabled. That’s because SCVMM requires SCOM to support VM monitoring.

In SCVMM 2008 R2, if host resources are overloaded, virtual machines can be live-migrated off a cluster host. According to a Microsoft TechNet article, SCVMM recognizes that a host is overloaded when memory utilization is greater than “physical memory on the host minus the host reserve value for memory on the host.” It also recognizes when CPU utilization is greater than “100% minus the host reserve for CPU on the host.”

Neither server load-balancing calculation aggregates metrics throughout the cluster to determine resource balance. But SCVMM uses a per-host rating system that determines where to live-migrate VMs once a host is overloaded. The system uses four resources in its algorithm: CPU, memory, disk I/O capacity and network capacity. You can prioritize these resources with a slider in the SCVMM console.

There’s also an alternative solution for server load balancing: a PowerShell script that analyzes cluster conditions. Running the script balances virtual machines across a cluster by comparing the memory properties of hosts and VMs in the cluster.
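The idea behind such a script is simple: gather a comparable metric from every host and rank the hosts with it. A stripped-down sketch of that idea using plain WMI (host names are placeholders; a real script would look at VM memory and more than one metric):

# Compare free physical memory across a set of Hyper-V hosts
$hyperVHosts = 'HV01', 'HV02', 'HV03'
$hyperVHosts | ForEach-Object {
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $_ |
        Select-Object @{n='Host';e={$_.__SERVER}},
                      @{n='FreeMemoryMB';e={[int]($_.FreePhysicalMemory / 1024)}}
} | Sort-Object FreeMemoryMB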

Load-balancing services use numerous calculations to determine whether clustered VMs are balanced. But if you don’t understand how your service computes these metrics, server load balancing is tricky. Even if you’re not a math whiz, these metrics help prevent load-balancing problems.


SQL Azure Services – A full overview

Posted by Alin D on May 11, 2011

SQL Azure is a relational database service in the cloud on Microsoft’s Windows Azure platform, well suited for web-facing database applications.

The present version mostly deals with the component analogous to a database engine in a local, on-site SQL Server. Future enhancements will host the other services such as Integration Services, Reporting Services, Service Broker, and any other yet-to-be defined services. Although these services are not hosted in the cloud, they can leverage data on SQL Azure to provide support. SQL Server Integration Services can be used to a great advantage with SQL Azure for data movement, and highly interactive boardroom quality reports can be generated using SQL Azure as a backend server.

Infrastructure features

SQL Azure is designed to handle peak workloads through failover clustering, load balancing, replication and scaling out, all of which are managed automatically at the data center. SQL Azure’s infrastructure architecture is fashioned to implement all of these features.

High availability is made possible by replicating multiple redundant copies to multiple physical servers, thus ensuring the business process can continue without interruption. At least three replicas are created; a replica can replace an active copy facing any kind of fault condition so that service is assured. At present, the replicated copies are all in the same data center, but in the future, geo-replication of data may become available so that performance for global enterprises may be improved. Hardware failures are addressed by automatic failover.

Enterprise data centers have traditionally addressed scale-out data storage needs, but incurred administrative overhead in maintaining the on-site SQL Servers. SQL Azure offers the same or even better functionality without incurring administrative costs.

How different is SQL Azure from SQL Server?

SQL Azure (version 10.25) may be viewed as a subset of an on-site SQL Server 2008 (version 10.5), both exposing Tabular Data Stream (TDS) for data access using T-SQL. As a subset, SQL Azure supports only some of the features of SQL Server and the T-SQL feature set. However, more T-SQL features are being added in the continuous upgrades from SU1 to SU5. Since it is hosted on computers in the Microsoft Data Centers, its administration—in some aspects—is different from that of an on-site SQL Server.

SQL Azure is administered as a service, unlike on-site servers. The SQL Azure server is not a SQL Server instance and is therefore administered as a logical server rather than as a physical server. The database objects such as tables, views, users, and so on are administered by the SQL Azure database administrator, but the physical side of it is administered by Microsoft in its data centers. This abstraction of infrastructure away from the user confers most of its availability, elasticity, price, and extensibility features. To get started with SQL Azure, you must provision a SQL Azure Server on the Windows Azure platform, as explained in the After accessing the portal subsection later in the article.

SQL Azure provisioning

Provisioning a SQL Azure Server at the portal is done by a mere click of the mouse and will be ready in a few minutes. You may provision the storage that you need, and when the need changes, you can add or remove storage. This is an extremely attractive feature especially for those whose needs start with low storage requirements and grow with time. It is also attractive to those who may experience increased load at certain times only.

SQL Azure databases lie within the operational boundary of the customer-defined SQL Azure Server; it is a container of logical groupings of databases enclosed in a security firewall fence. While the databases are accessible to the user, the files that store the relational data are not; they are managed by the SQL Azure services.

A single SQL Azure Server, which you get when you subscribe, can house a large number (150) of databases, presently limited to the 1 GB and 10 GB types within the scope of the licensing arrangement.

• What if you provision for 1 GB and you exceed this limit?

Then either you provision a server with a 10 GB database or get one more 1 GB database. This means that there is a bit of due diligence you need to do before you start your project.

• What if the data exceeds 10 GB?

The recommendation is to partition the data into smaller databases. You may have to redesign your queries to address the changed schema, as cross-database queries are not supported. The rationale for using smaller databases and partitioning lies in the agility to quickly recover from failures (high availability/fault tolerance) and the ability to replicate faster, while addressing the needs of the majority of users (small business and web facing). However, responding to the requests of users, Microsoft may provide 50 GB databases in the future (the new update in June 2010 to SQL Azure Services will allow 50 GB databases).

• How many numbers of SQL Azure Servers can you have?

You can have any number of SQL Azure Servers (that you can afford) and place them in any geolocation you choose. It is strictly one server for one subscription. Presently there are six geolocated data centers that can be chosen, and the number of data centers is likely to grow. Best practices dictate that you keep your data nearest to where you use it most, so that performance is optimized. The SQL Azure databases, being relational in nature, can be programmed using the T-SQL skills that are used in working with on-site SQL Servers. It must be remembered, though, that SQL Azure Servers are not physical servers but are virtual objects. Hiding their physical whereabouts but providing adequate hooks to them helps you to focus more on the design and less on being concerned with files, folders, and hardware problems. While the server-related information is shielded from the user, the databases themselves are containers of objects similar to what one finds in on-site SQL Servers, such as tables, views, stored procedures, and so on. These database objects are accessible to logged-on users who have permission.

After accessing the portal

To get started with SQL Azure Services, you will need to get a Windows Azure platform account, which gives access to the three services presently offered. The first step is to get a Windows Live ID and then establish an account at Microsoft’s Customer Portal. In this article, you will be provisioning a SQL Azure Server after accessing the SQL Azure Portal.

Server-level administration

Once you are in the portal, you will be able to create your server, for which you can provide a username and password. You will also be able to drop the server and change the password. You can also designate in which of the data centers you want your server to be located. With the credentials created in the portal, you become the server-level principal, the equivalent of sa for your server. In the portal, you can also create databases and firewall fences that will only allow users from the location(s) you specify here. The user databases that you create here are in addition to the master database created by SQL Azure Services, which is a repository of information about the other databases. The master database also keeps track of logins and their permissions. You can get this information by querying the master database for the sys.sql_logins and sys.databases views.

If you are planning to create applications, you may also copy the connection strings that you would need for your applications, which are available in the portal. You would be typically using the Visual Studio IDE to create applications. However, SQL Azure can be used standalone without having to use the Windows Azure service. Indeed some users may just move their data to SQL Azure for archive.

Once you have provisioned a server, you are ready to create other objects that are needed besides creating the databases. At the portal, you can create a database and set up a firewall fence, but you will need another tool to create other objects in the database.

Setting up firewall rules

Users accessing SQL Azure Server in the Cloud need to go through two kinds of barriers. Firstly, you need to go through your computer’s firewall and then go in through the firewall that protects your SQL Azure Server. The firewall rules that you set up in the portal allow only users from the location you set up for the rule, because the firewall rules only look at the originating IP address.

By default, there are no firewall rules to start with and no one gets admitted. Firewall rules are first configured in the portal. If your computer is behind Network Address Translation (NAT), then your IP address will be different from what you see in your configuration settings. However, the user interface in the portal for creating a firewall rule discovers and displays the correct IP address most of the time.

A workaround is suggested here for those cases in which your firewall UI incorrectly displays your IP Address: http://hodentek.blogspot.com/2010/01/firewall-ip-address-setting-in-sql.html.

Firewalls can also be managed from a tool such as SSMS using extended stored procedures in SQL Azure. They can be managed programmatically as well from Visual Studio.
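As a rough sketch of the stored-procedure route (the rule name and address range below are placeholders), the statements are run while connected to the master database:

-- Create or update a server-level firewall rule
EXEC sp_set_firewall_rule N'OfficeNetwork', '203.0.113.10', '203.0.113.20';

-- Review the rules currently in place
SELECT * FROM sys.firewall_rules;

-- Remove the rule when it is no longer needed
EXEC sp_delete_firewall_rule N'OfficeNetwork';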

In order for you to connect to SQL Azure, you also need to open your computer’s firewall, so that an outgoing TCP connection is allowed through port 1433 by creating an exception. You can configure this in your computer’s Control Panel. If you have set up some security program, such as Norton Security, you need to open this port for outgoing TCP connections in the Norton Security Suite’s UI.

In addition, your on-site programs accessing SQL Azure Server and your hosted applications on Windows Azure may also need access to SQL Azure. For this scenario, you should check the checkbox Allow Microsoft Services access to this server in the firewall settings page.

The firewall rule only checks for an originating IP address but you need to be authenticated to access SQL Azure. Your administrator, in this case the server-level principal, will have to set you up as a user and provide you with appropriate credentials.

Administering at the database level

SQL Azure database administration is best done from SSMS. You connect to the Database Engine in SSMS, which displays a user interface where you enter the credentials that you established in the portal. You also have other options to connect to SQL Azure (Chapter 3, Working with SQL Azure Databases from Visual Studio 2010 and Chapter 4, SQL Azure Tools). In SSMS, you have the option to connect to either of the databases, the system-created master or the database(s) that you create in the portal. The Object Explorer displays the server with all objects that are contained in the chosen database. What is displayed in the Object Explorer is contextual and the use of the USE statement to change the database context does not work. Make sure you understand this, whether you are working with Object Explorer or query windows. The server-level administrator is the ‘top’ administrator and he or she can create other users and assign them to different roles just like in the on-site SQL Server. The one thing that an administrator cannot do is undertake any activity that would require access to the hardware or the file system.

Role of SQL Azure database administrator

The SQL Azure database administrator administers and manages schema generation, statistics management, index tuning, query optimization, as well as security (users, logins, roles, and so on). Since the physical file system cannot be accessed by the user, tasks such as backing up and restoring databases are not possible. Looking at questions and concerns raised by users in forums, this appears to be one of the less appealing aspects of SQL Azure and has often resulted in remarks that ‘it is not enterprise ready’. Users want to keep a copy of the data, and if it is a very large database, the advantages of not having servers on site disappear, as you do need a server on-site to back up the data. One recommendation suggested by Microsoft is to use SQL Server Integration Services and bulk copying of data using the bcp utility.

SQL Azure databases

These databases are no different from those of on-site SQL Server 2008 except that the user database node may not have all the nodes of a typical user database that you find in the on-site server. The nodes Database Diagrams, Service Broker, and Storage will be absent as these are not supported. In the case of the system database node, only the master will be present. The master in SQL Azure is a database that contains all information about the other databases.

You can only access SQL Azure with SQL Server Authentication, whereas an on-site SQL Server offers the additional option of Windows Authentication. All the allowed DDL and DML operations can be programmed using templates available in SSMS. Some of the more common ones, as well as access to the Template Explorer, which provides a more complete list, are detailed later in the chapter.

User administration and logins

Security is a very important aspect of database administration and it is all the more important in the case of the multi-tenant model used in hosting SQL Azure to control access.

The server-level administrator created in the portal is the top-level administrator of the SQL Azure Server. While he can create other databases in the portal, he will have to create other database objects, including users and their logins, using SSMS.

Server-level administration

The master database is used to perform server-level administration, as the master database keeps records of all logins and of the logins that have permission to create a database. You must first establish a connection to the master database when creating a New Query to carry out CREATE, ALTER, or DROP statements for LOGINS or DATABASES. The server-related views sys.sql_logins and sys.databases can be used to review logins and databases. Whenever you want to change the database context, you have to log in to that database using the Options button in SSMS’s Connect to Server dialog.

Creating a database using T-SQL is extremely simple, as there are no file references to be specified and certain other options are not implemented. The following syntax is for creating a database in an on-site SQL Server instance:

CREATE DATABASE database_name
[ ON
    [ PRIMARY ] [ <filespec> [ ,...n ]
    [ , <filegroup> [ ,...n ] ]
    [ LOG ON { <filespec> [ ,...n ] } ]
]
[ COLLATE collation_name ]
[ WITH <external_access_option> ]
]
[;]

To attach a database:

CREATE DATABASE database_name
    ON <filespec> [ ,...n ]
    FOR { ATTACH [ WITH <service_broker_option> ]
        | ATTACH_REBUILD_LOG }
[;]

<filespec> ::=
{
    (
        NAME = logical_file_name ,
        FILENAME = { 'os_file_name' | 'filestream_path' }
        [ , SIZE = size [ KB | MB | GB | TB ] ]
        [ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
        [ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
    ) [ ,...n ]
}

<filegroup> ::=
{
    FILEGROUP filegroup_name [ CONTAINS FILESTREAM ] [ DEFAULT ]
        <filespec> [ ,...n ]
}

<external_access_option> ::=
{
    [ DB_CHAINING { ON | OFF } ]
    [ , TRUSTWORTHY { ON | OFF } ]
}

<service_broker_option> ::=
{
    ENABLE_BROKER
    | NEW_BROKER
    | ERROR_BROKER_CONVERSATIONS
}

To create a database snapshot:

CREATE DATABASE database_snapshot_name
    ON
    (
        NAME = logical_file_name,
        FILENAME = 'os_file_name'
    ) [ ,...n ]
    AS SNAPSHOT OF source_database_name
[;]

Look how simple the following syntax is for creating a database in SQL Azure:

CREATE DATABASE database_name
[ ( MAXSIZE = { 1 | 10 } GB ) ]
[;]

However, certain default values are set for the database; they can be reviewed by issuing the following query after creating it:

SELECT * FROM sys.databases;
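
As a quick, hedged illustration (the database name is just a placeholder), a Web edition database capped at 1 GB can be created and its defaults reviewed like this:

-- create a 1 GB database and inspect the defaults SQL Azure assigned to it
CREATE DATABASE SalesDemo (MAXSIZE = 1 GB);
SELECT * FROM sys.databases WHERE name = 'SalesDemo';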

Managing logins

After logging in to master as the server-level administrator, you can manage logins using the CREATE LOGIN, ALTER LOGIN, and DROP LOGIN statements. You can create a login with a password by executing the following statement, for example, while connected to master:

CREATE LOGIN xfiles WITH PASSWORD = '@#$jAyRa1';

You need to create the password before you proceed further. During authentication you will normally use the login name and password, but because some tools implement TDS differently, you may have to append the server-name part of the fully qualified server name (<servername>.database.windows.net) to the user name, as in login_name@<servername>. Note that both <login_name> and <login_name>@<servername> are accepted in the Connect to Server UI of SSMS.
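
For completeness, here is a minimal sketch of the other two login-management statements, again run while connected to master (the login name and password are placeholders):

-- change the password of an existing login
ALTER LOGIN xfiles WITH PASSWORD = 'aN0ther$trongPwd';
-- remove the login when it is no longer needed
DROP LOGIN xfiles;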

Connecting to SQL Azure using new login

After creating a new login as described here, you must grant it database-level permissions before it can connect to SQL Azure. You do this by creating a user mapped to the login in the database it needs to access.
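
A minimal sketch, assuming the xfiles login created earlier, run while connected to the target user database (not master); the user name and role are placeholders:

-- map a database user to the server-level login
CREATE USER xfiles_user FROM LOGIN xfiles;
-- give the user read access through a fixed database role
EXEC sp_addrolemember 'db_datareader', 'xfiles_user';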

Logins with server-level permissions

The roles loginmanager and dbmanager are two security-related roles in SQL Azure to which users may be assigned, allowing them to create logins or create databases. Only the server-level principal (created in the portal) or users in the loginmanager role can create logins. The dbmanager role is similar to the dbcreator role; users in this role can create databases using the CREATE DATABASE statement while connected to the master database.

These role assignments are made using the stored procedure sp_addrolemember, as shown here for the users User1 and User2. These users are created while connected to master using, for example:

CREATE USER User1 FROM LOGIN login1;
CREATE USER User2 FROM LOGIN login2;
EXEC sp_addrolemember 'dbmanager', 'User1';
EXEC sp_addrolemember 'loginmanager', 'User2';

Migrating databases to SQL Azure

As most web applications are data-centric, SQL Azure databases need to be populated with data before the applications can access it. More often than not, if you are trying to push all of your data to SQL Azure, you need tools. You have several options, such as scripts, a migration wizard, bulk copy (bcp.exe), SQL Server Integration Services, and so on. More recently (April 19, 2010 update), data-tier applications were implemented for SQL Azure, providing yet another option for migrating databases using both SSMS and Visual Studio.

Troubleshooting

There may be any number of reasons why interacting with SQL Azure does not always succeed. For example, the service level agreement that promises 99.9 percent availability may momentarily not be met, a command may exceed the time-out set for its execution, and so on. In these cases, troubleshooting to find out what happened becomes important. Here we will look at some of the issues that prevent interacting with SQL Azure and ways of troubleshooting their causes.

• Login failure is one of the most common problems encountered when connecting to SQL Azure. In order to log in successfully:

◦ Make sure that you are using a version of SSMS that supports SQL Azure.

◦ Make sure you are using SQL Server Authentication in the Connect to Server dialog box.

◦ Make sure your login name and password (typed exactly as given to you by your administrator) are correct; the password is case sensitive. Sometimes you may need to append the server name to the login name.

◦ If you cannot browse the databases, type in the database name and try.

If your login is not successful, either there is a problem in the login or the database is not available.

If you are a server-level administrator, you can reset the password in the portal. For other users, the administrator or a member of the loginmanager role can correct the logins.

• Service unavailable or does not exist.

If you have already provisioned a server, check the Service Dashboard at http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx to make sure SQL Azure services are running without problems at the data center.

Use the same techniques that you would use in the case of SQL Server 2008 with network commands like Ping, Tracert, and so on. Use the fully qualified name of the SQL Azure Server you have provisioned while using these utilities.

• You assume you are connected, but maybe you are disconnected.

You may be in a disconnected state for a number of reasons, such as:

◦ When a connection is idle for an extended period of time

◦ When a connection consumes an excessive amount of resources or holds onto a transaction for an extended period of time

◦ If the server is too busy

Try reconnecting. Note that SQL Azure error messages are a subset of SQL Server error messages.

T-SQL support in SQL Azure

Transact-SQL is used to administer SQL Azure. You can create and manage objects, as you will see later in this chapter. CRUD (create, read, update, delete) operations on tables are supported. Applications can insert, retrieve, modify, and delete data by interacting with SQL Azure using T-SQL statements.

SQL Azure supports only a subset of the T-SQL that you find in SQL Server 2008.

The supported and partially supported features from Microsoft documentation are reproduced here for easy reference.

The support for Transact-SQL reference in SQL Azure can be described in three main categories:

• Transact-SQL language elements that are supported as is

• Transact-SQL language elements that are not supported

• Transact-SQL language elements that provide a subset of the arguments and options in their corresponding Transact-SQL elements in SQL Server 2008

The following Transact-SQL features are supported or partially supported by SQL Azure:

• Constants

• Constraints

• Cursors

• Index management and rebuilding indexes

• Local temporary tables

• Reserved keywords

• Stored procedures

• Statistics management

• Transactions

• Triggers

• Tables, joins, and table variables

• Transact-SQL language elements

• Create/drop databases

• Create/alter/drop tables

• Create/alter/drop users and logins

• User-defined functions

• Views

The following Transact-SQL features are not supported by SQL Azure:

• Common Language Runtime (CLR)

• Database file placement

• Database mirroring

• Distributed queries

• Distributed transactions

• Filegroup management

• Global temporary tables

• Spatial data and indexes

• SQL Server configuration options

• SQL Server Service Broker

• System tables

• Trace flags

T-SQL grammar details are found here: http://msdn.microsoft.com/en-us/library/ee336281.aspx.
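
As a small, hedged illustration of the supported subset (table and column names are placeholders), note that SQL Azure requires every table to have a clustered index before rows can be inserted:

-- a supported DDL/DML round trip in SQL Azure
CREATE TABLE dbo.Customers
(
    CustomerID int NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
);
INSERT INTO dbo.Customers (CustomerID, CustomerName) VALUES (1, N'Contoso');
SELECT CustomerID, CustomerName FROM dbo.Customers;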

 

Posted in Azure | Tagged: , , , , , , | 1 Comment »

Failover clustering, network load balancing drive high availability

Posted by Alin D on January 25, 2011

Most of your customers know business productivity and revenues can be drastically affected if a mission-critical server, application or service fails. Indeed, one of the main objectives for IT departments everywhere is providing high availability for mission-critical resources. Toward that goal, service providers can implement high-availability alternatives in Windows Server 2008 to mitigate server outages for their Windows shop customers.

The first step in designing a Windows-based high-availability solution is understanding the two main high-availability alternatives available with Windows Server 2008: failover clustering and network load balancing. These options tackle high availability in different ways.

Failover clustering

At the macro level, a Windows Server 2008 failover cluster provides high availability by eliminating the threat of a single point of failure for a server, application or service. Normally, if a server with a particular application or service crashes, the application or service is unavailable until an administrator manually rectifies the problem. But if a clustered server crashes, another server within the cluster will automatically take over the failed server’s application and service responsibilities without intervention from an administrator or impact on operations.

Windows Server 2008 supports the shared-nothing cluster model, in which two or more independent servers, or nodes, operate together without sharing resources: each server owns and is responsible for managing its local resources and provides non-sharing services. In case of a node failure, the disks, resources and services running on the failed node fail over to a surviving node in the cluster. For example, if an Exchange server is operating on node 1 of the cluster and it crashes, the Exchange application and services will automatically fail over to node 2 of the cluster. This model minimizes server outages and downtime. Only one node manages a particular set of disks, cluster resources and services at any given time.

When designing and implementing failover clusters, service providers need to ensure the following preconditions: that each server’s hardware specifications are identical, that a shared storage server such as a SAN or NAS is in place, and that a dedicated network for heartbeat communication between server nodes is available. In addition, all hardware and software drivers associated with the cluster must be certified by Microsoft, and the customer must use either the Enterprise or Data Center Edition of Windows Server 2008. Those editions support as many as 16 nodes in a single failover cluster implementation.

Network load balancing
Network load balancing (NLB), Windows Server 2008’s other high-availability alternative, enables an organization to scale server and application performance by distributing TCP/IP requests to multiple servers, also known as hosts, within a server farm. This scenario optimizes resource utilization, decreases computing time and ensures server availability. Typically, service providers should consider network load balancing if their customer situation includes, but is not limited to, Web server farms, Terminal Services farms, media servers or Exchange Outlook Web Access servers.

Above and beyond providing scalability by distributing TCP/IP traffic among servers participating in a farm, NLB also ensures high availability by identifying host failures and automatically redistributing traffic to the surviving hosts.

Network load balancing is native to all editions of Windows Server 2008. Unlike failover clustering, NLB does not require any special hardware, and a network load balancing server farm can include as many as 32 nodes. When designing and implementing NLB server farms, it’s common to start off with two servers for scalability and high availability and then add additional nodes to the farm as TCP/IP traffic increases.

Clearly, failover clustering and network load balancing with Windows Server 2008 give service providers options when designing and implementing high availability for their customers' mission-critical servers and applications. Through the use of failover clustering and network load balancing, customers will gain increased availability for mission-critical servers, less downtime during routine maintenance, fewer server outages, and minimal end-user disruption during a failover.

Posted in Windows 2008 | Tagged: , , , , , , | Leave a Comment »

Active Directory Rights Management Services (AD RMS)

Posted by Alin D on January 19, 2011

Active Directory Rights Management Services (AD RMS) is an information protection technology that works with AD RMS-enabled applications to help safeguard digital information from unauthorized use. Content owners can define who can open, modify, print, forward, or take other actions with the information.

Introduction

Your organization's overall security strategy must incorporate methods for maintaining security, protection, and validity of company data and information. This includes not only controlling access to the data, but also how the data is used and distributed to both internal and external users. Your strategy may also include methods to ensure that the data is tamper-resistant and that the most current information is valid based on the expiration of outdated or time-sensitive information.
AD RMS enhances your organization's existing security strategy by applying persistent usage policies to digital information. A usage policy specifies trusted entities, such as individuals, groups of users, computers, or applications. These entities are only permitted to use the information as specified by the rights and conditions configured within the policy. Rights can include permissions to perform tasks such as read, copy/paste, print, save, forward, and edit. Rights may also be accompanied by conditions, such as when the usage policy expires for a specific entity. Usage policies remain with the protected data at all times to protect information stored within your organization's intranet, as well as information sent externally via e-mail or transported on a mobile device.

AD RMS Features

An AD RMS solution is typically deployed throughout the organization with the goal of protecting sensitive information from being distributed to unauthorized users. The addition of AD RMS–enabled client applications such as the 2007 Office system or AD RMS–compatible server roles such as Exchange Server 2007 and Microsoft Office SharePoint Server 2007 provides an overall solution for the following uses:

Enforcing document rights

Every organization has documents that can be considered sensitive information. Using AD RMS, you can control who is able to view these sensitive files and prevent readers from accessing selected application functions, such as printing, saving, copying, and pasting. If a group of employees is collaborating on a document and frequently updating it, you can configure and apply a policy that includes an expiration date of document rights for each published draft. This helps to ensure that all involved parties are using only the latest information: the older versions will not open after they expire.

Protecting e-mail communication

Microsoft Office Outlook 2007 can use AD RMS to prevent an e-mail message from being accidentally or intentionally mishandled. When a user applies an AD RMS rights policy template to an e-mail message, numerous tasks can be disabled, such as forwarding the message, copying and pasting content, printing, and exporting the message.

Depending on your security requirements, you may have already implemented a number of technologies to secure digital content. Technologies such as Access Control Lists (ACLs), Secure Multipurpose Internet Mail Extensions (S/MIME), or the Encrypted File System (EFS) can all be used to help secure e-mail and company documents. However, AD RMS still provides additional benefits and features in protecting the confidentiality and use of the data stored within the documents.

Active Directory Rights Management Services Components

The implementation of an AD RMS solution consists of several components, some of which are optional. The size of your organization, scalability requirements, and data sharing requirements all affect the complexity of your specific configuration.

Figure 1

AD RMS Root Cluster

The AD RMS root cluster is the primary component of an RMS deployment and is used to manage all certification and licensing requests for clients. There can be only one root cluster in each Active Directory forest; it contains at least one Windows Server 2008 server that runs the AD RMS server role. You can add multiple servers to the cluster for redundancy and load balancing. During initial installation, the AD RMS root cluster performs an automatic enrollment that creates and signs a server licensor certificate (SLC). The SLC is used to grant the AD RMS server the ability to issue certificates and licenses to AD RMS clients. In previous versions of RMS, the SLC had to be signed by the Microsoft Enrollment Service over the Internet. This required Internet connectivity from either the RMS server or from another computer used for offline enrollment of the server. Windows Server 2008 AD RMS has removed the requirement to contact the Microsoft Enrollment Service: Windows Server 2008 includes a server self-enrollment certificate that is used to sign the SLC locally. This removes the previous requirement for an Internet connection to complete the RMS cluster enrollment process.

Web Services

Each server that is installed with the AD RMS server role also requires a number of Web-related server roles and features. The Web Server (IIS) server role is required to provide most of the AD RMS application services, such as licensing and certification. These IIS-based services are called application pipelines. The Windows Process Activation Service and Message Queuing features are also required for AD RMS functionality. The Windows Process Activation Service is used to provide access to IIS features from any application that hosts Windows Communication Foundation services. Message Queuing provides guaranteed message delivery between the AD RMS server and the SQL Server database. All transactions are first written to the message queue and then transferred to the database. If connectivity to the database is lost, the transaction information will be queued until connectivity resumes.
During the installation of the AD RMS server role, you specify the Web site on which the AD RMS virtual directory will be set up. You also provide the address used to enable clients to communicate with the cluster over the internal network. You can specify an unencrypted URL, or you can use an SSL certificate to provide SSL-encrypted connections to the cluster.

Licensing-only Clusters

A licensing-only cluster is optional and is not part of the root cluster; however, it relies on the root cluster for certification and other services (it cannot provide account certification services on its own). The licensing-only cluster is used to provide both publishing licenses and use licenses to users. A licensing-only cluster can contain a single server, or you can add multiple servers to provide redundancy and load balancing. Licensing-only clusters are typically deployed to address specific licensing requirements, such as supporting unique rights management requirements of a department or supporting rights management for external business partners as part of an extranet scenario.

Database Services

AD RMS requires a database to store configuration information, such as configuration settings, templates, user keys, and server keys. Logging information is also stored within the database. SQL Server is also used to keep a cache of expanded group memberships obtained from Active Directory to determine if a specific user is a member of a group. For production environments, it is recommended that you use a database server such as SQL Server 2005 or later. For test environments, you can use an internal database that is provided with Windows Server 2008; however, the internal database only supports a single-server root cluster.

How AD RMS Works

Server and client components of an AD RMS solution use various types of eXtensible rights Markup Language (XrML)–based certificates and licenses to ensure trusted connections and protected content. XrML is an industry standard that is used to provide rights that are linked to the use and protection of digital information. Rights are expressed in an XrML license attached to the information that is to be protected. The XrML license defines how the information owner wants that information to be used, protected, and distributed.

AD RMS Deployment Scenarios

To meet specific organizational requirements, AD RMS can be deployed in a number of different scenarios. Each of these scenarios offers unique considerations to ensure a secure and effective rights-management solution. These are some possible deployment scenarios:

■ Providing AD RMS for the corporate intranet
■ Providing AD RMS to users over the Internet
■ Integrating AD RMS with Active Directory Federation Services

Deploying AD RMS within the Corporate Intranet

A typical AD RMS installation takes place in a single Active Directory forest. However, there may be other specific situations that require additional consideration. For example, you may need to provide rights-management services to users throughout a large enterprise with multiple branch offices. For scalability and performance reasons, you might choose to implement licensing-only clusters within these branch offices. You may also have to deploy an AD RMS solution for an organization that has multiple Active Directory forests. Since each forest can contain only a single root cluster, you will have to determine appropriate trust policies and AD RMS configuration between the forests. This will effectively allow users from both forests to publish and consume rights-managed content.

Deploying AD RMS to Users over the Internet

Most organizations have to support a mobile computing workforce, which consists of users that connect to organizational resources from remote locations over the Internet. To ensure that mobile users can perform rights-management tasks, you have to determine how to provide external access to the AD RMS infrastructure. One method is to place a licensing-only server within your organization's perimeter network. This will allow external users to obtain use and publishing licenses for protecting or viewing information. Another common solution is to use a reverse proxy server such as Microsoft Internet Security and Acceleration (ISA) Server 2006 to publish the extranet AD RMS cluster URL. The ISA server then handles all requests from the Internet to the AD RMS cluster and passes them on when necessary. This is a more secure and effective method, so it is typically recommended over placing licensing servers within the perimeter network.

Deploying AD RMS with Active Directory Federation Services

Windows Server 2008 includes the Active Directory Federation Services (AD FS) server role, which is used to provide trusted inter-organizational access and collaboration scenarios between two organizations. AD RMS can take advantage of the federated trust relationship as a basis for users from both organizations to obtain rights account certificates (RACs), use licenses, and publishing licenses. In order to install AD RMS support for AD FS, you need to have already deployed an AD FS solution within your environment. This scenario is recommended if one organization has AD RMS and the other does not. If both have AD RMS, trust policies are typically recommended instead.

Posted in Windows 2008 | Tagged: , , , , , , , , , , , , | Leave a Comment »

SharePoint Performance Tuning

Posted by Alin D on November 1, 2010

I was recently looking for SharePoint 2010 performance articles on Microsoft pages and established blogs, and found that most of them didn't cover all the details. Some of them simply described MS SQL-based tips, some were purely system related, and it was extremely hard to find useful SharePoint-specific performance tips. I've decided to try to fill this gap and provide all the SharePoint performance steps and details I know in one place.

SharePoint Hardware Planning

Before you even start thinking about improving performance, keep in mind that even the best tips won't help you if your hardware is simply too weak to handle a SharePoint environment.

This article is not intended to explain how to plan your hardware environment, but the only detail that is worth mentioning is that you should know the future details of your SharePoint farm BEFORE you buy the hardware, such as:

  • Total number of SharePoint farm users
  • Simultaneous number of SharePoint farm users
  • Services that will be provided (Search, FAST Search Server, Office Web Apps, Visio Services, and so on may decrease performance, so you may need to provide dedicated hardware for them)
  • Amount of data that will be stored and processed by the SharePoint farm on a daily/weekly/monthly basis.

Knowing the above, you can probably design your infrastructure successfully and be happy with the performance of your SharePoint farm after the deployment.

Note: a useful tool to plan your infrastructure (if you know the above details) is the HP Sizer for Microsoft SharePoint, which can be accessed at http://h20338.www2.hp.com/activeanswers/Secure/548230-0-0-0-121.html.

SharePoint Front End Caching

SharePoint Server 2010 ships with strong caching capabilities, such as the BLOB (Binary Large OBject) cache, cache profiles, and the object cache. We'll start with the BLOB cache.

The BLOB cache is disk-based caching that greatly increases browser performance and reduces database load, since SharePoint reads cached content from BLOB files instead of the database. When you open a web page for the first time, the files are copied from the database to the cache on the SharePoint server's hard drive, and all subsequent requests for this content are served from the local disk cache instead of issuing a resource-intensive request to the SQL Server database.

To enable the BLOB cache for a web application of your choice, you need to edit its web.config file. Open IIS Manager on the front-end server where your web application is hosted, and use the Explore option to find where it is located on the hard drive (usually C:\inetpub\wwwroot\wss\...).


IIS Manager Explore option for application SharePoint – 80

Next, open the web.config file with your favorite text editor (notepad will be sufficient for this).


Web.config file in the application root directory

Now, find the line starting with:

<BlobCache location=

and set the properties correctly. We need to set the cache directory and change the "enabled" attribute to "true". It is strongly recommended to store the cache on a dedicated partition that is not part of the operating system (the C: partition is not recommended). This is why I've stored my cache on the D: partition.

<BlobCache location="D:\BlobCache\14" path="\.(gif|jpg|jpeg|jpe|jfif|bmp|dib|tif|tiff|ico|png|wdp|hdp|css|js|asf|avi|flv|m4v|mov|mp3|mp4|mpeg|mpg|rm|rmvb|wma|wmv)$" maxSize="10" enabled="true" />

In the path attribute, you can add or remove file extensions that will be cached. The maxSize attribute is used to change the maximum size of the cache on your hard drive in gigabytes (GB); the default maximum size is 10 GB.

To configure cache profiles, we will also use the web.config file. This will allow us to override the user interface cache profile settings, so we have full control over the process. To use cache profiles, site collections must have the publishing feature enabled first.

To enable cache profiles, find the line in web.config:

<OutputCacheProfiles

and set the attributes of this tag appropriately:

useCacheProfileOverrides="false": change this to "true" to enable overriding the cache profile settings.

The next three attributes (varyByHeader, varyByParam, and varyByCustom) define custom parameters in the .NET Framework class library; we don't need to change these, so the default settings are fine. The varyByRights attribute, when set to "false", removes the requirement that other users have identical effective permissions on all securable objects before they are served the cached page. Change this value to "false".

The cacheForEditRights attribute bypasses the default behavior of caching pages per user. Change this attribute to "true".

The final result of the modified output cache profiles line in web.config should be similar to this:

<OutputCacheProfiles useCacheProfileOverrides="true" varyByHeader="" varyByParam="*" varyByCustom="" varyByRights="false" cacheForEditRights="true" />

Next, we need to configure the object cache. Object cache settings can be altered at the site collection level using the user interface, and this cache is enabled by default. The maximum size of this cache can be configured at the web application level on the web front-end servers (as with the cache profiles). To use the object cache, the site collections must have the publishing feature enabled.

To change object cache settings, open the web.config file of our application and find the line:

<ObjectCache maxSize

The default value for the maxSize attribute is 100, which means 100 megabytes (MB) will be used for the entire web application for object caching. You should modify this value to use more of the physical memory on the front-end server. If you see that a server consistently has more than 30% available memory, you can improve site performance by increasing the maxSize attribute.

That's all about the SharePoint caching options, which are mostly configured in the web.config file. Now that we have the BLOB cache enabled and the cache profiles and object cache tweaked to make full use of our hardware, we can move on to tuning SharePoint authentication performance, which is the focus of part 2 of the SharePoint performance series.

Enabling Kerberos Authentication

If your sites are serving numerous requests at a time and you are experiencing slow page loads, you should consider switching the site-level authentication from NTLM to Kerberos. While NTLM is fine for small or medium-sized sites, Kerberos is useful when your environment carries a high workload and needs to process a large number of requests. With NTLM, authentication requests aren't cached and need to go to the domain controller every time a request is made to an object, which is a performance drag. With Kerberos, authentication tickets can be cached, so the process won't have to contact the domain controller for every object request; this can dramatically improve SharePoint performance.

To enable Kerberos authentication for your web application, we’ll have to specify the application pool identity and then create a new SPN using the setspn.exe tool.

Go to IIS Manager on the web server and select the website where you want to enable Kerberos authentication (1) using the left pane. Then go into the Authentication icon, select Windows Authentication (2) (which should be enabled) and click Advanced Settings (3). You need to make sure that the "Enable Kernel-mode authentication" option is checked (4); after checking this option, perform an IIS reset before resuming.


Enabling Kernel Mode Authentication in IIS Manager

Next, we need to run appcmd and set the useAppPoolCredentials attribute to true for our web application (SharePoint - 80). You need to run the cmd console in administrator mode if your server has User Account Control enabled. The appcmd tool is located in the C:\Windows\System32\inetsrv folder.

Now, execute a command:

Appcmd set config "SharePoint - 80" /section:windowsauthentication /useAppPoolCredentials:true /commit:MACHINE/WEBROOT/APPHOST


CMD console with appcmd command

Now we need to check whether the application host configuration is properly set up in order to continue with the Kerberos authentication setup. Open C:\Windows\System32\inetsrv\config\applicationHost.config and check whether our application (SharePoint - 80) has the proper attributes set in the system.webServer section.

My entire SharePoint – 80 entry in the applicationHost.config file is below:

<location path="SharePoint - 80">
  <system.webServer>
    <handlers accessPolicy="Read, Execute, Script" />
    <security>
      <authentication>
        <windowsAuthentication enabled="true" useKernelMode="true" useAppPoolCredentials="true">
          <providers>
            <clear />
            <add value="NTLM" />
          </providers>
          <extendedProtection tokenChecking="None" />
        </windowsAuthentication>
        <anonymousAuthentication enabled="false" />
        <digestAuthentication enabled="false" />
        <basicAuthentication enabled="false" />
      </authentication>
    </security>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <httpErrors existingResponse="PassThrough" />
    <httpProtocol>
      <customHeaders>
        <clear />
        <add name="X-Powered-By" value="ASP.NET" />
        <add name="MicrosoftSharePointTeamServices" value="14.0.0.4762" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</location>



Please note that the windowsAuthentication attributes we've just set, enabled="true", useKernelMode="true", and useAppPoolCredentials="true", are required for Kerberos authentication to work properly.

Now run IISReset /noforce to reload the changes on the web server. We have only one step left in the backend configuration of Kerberos: we need to set the SPN, which is required to map the service and host name to our custom application pool account.

On the Web-Front End server open command prompt with administrative privileges, and execute the command:

Setspn -A http://SiteURL domain\application_pool_account

It is very important to type in the valid application URL and the domain account that is the identity of the application pool of the site. If you are unsure what the application pool identity is, go to IIS Manager, select the Application Pools section in the left pane, and read the account that your application pool (SharePoint - 80 in this example) is running under.


Application Pools view in IIS Manager

As you can see in our example, the SharePoint - 80 application pool is using the account chaos\spsadmin, so the command in my environment will be:

Setspn -A http://sps2010 chaos\spsadmin

Now we should enable trust for delegation for this account. To do this, go to the domain controller and launch the Active Directory Users and Computers console, locate the account (in our example, chaos\spsadmin), open the properties of the account, select the Delegation tab, and then select the "Trust this user for delegation to any service (Kerberos Only)" option.

Note that you won't see the Delegation tab if you missed a step or made a mistake when configuring the SPN for the application pool identity with the setspn command.

Now the last Kerberos step: we need to enable Kerberos on the web application itself. To do this, launch Central Administration, select Application Management, then Manage Web Applications, and select our web application (SharePoint - 80). You should now see the Authentication Providers icon in the ribbon; click on it.


Central Administration – Authentication Providers icon in the ribbon

Select the zone of your web application where we'll be enabling Kerberos authentication (by default it is the Default zone), and in the IIS Authentication settings change the radio button from NTLM to Negotiate (Kerberos).


Authentication for the application changed from NTLM to Kerberos

We've spent quite some time configuring Kerberos, but believe me, it is worth the time spent, especially in larger environments, where you'll need the performance gains the most.

Application Pool Recycling

There's not much to configure, but a lot to explain in this section. It is very important to tweak application pool recycling to suit your farm infrastructure and server architecture. It is best to recycle the pools at night, when your sites have the lowest user traffic. If you have multiple load-balanced servers, it's strongly recommended to remove the server being recycled from the load balancer, or you'll experience poor performance during the process. Since SharePoint Server 2010 requires a 64-bit environment, you can forget about maximum memory-based limits, as this is managed by IIS itself.


Application Pool recycling settings

Checked Out Pages

If your sites use Enterprise Content Management and check-in/check-out functionality, you should never leave pages checked out, because this visibly decreases page rendering performance for users. Instead, check them back in as quickly as possible to avoid slower performance.

Now we have looked at most of the front-end SharePoint performance settings. In Part 3 we will look at some of the back-end performance tuning.


Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »