Windows Management and Scripting

A wealth of tutorials on Windows Operating Systems, SQL Server and Azure


Debugging Tools in Windows Server for TCP/IP – Ping, Tracert and Pathping

Posted by Alin D on February 7, 2011

TCP/IP is the backbone for communication and transport in Windows Server; before machines can communicate, TCP/IP must first be configured. TCP/IP is installed by default in Windows Server 2008 R2, and you can also add or remove it during the operating system installation. If a TCP/IP connection fails, you will need to identify the cause and the point of failure. Windows Server ships with several useful tools which can troubleshoot connections and verify connectivity. In this series of articles we will look at Ping, Tracert, Pathping, IPconfig, Arp, Netstat, Route, Nslookup and DCDiag. Most of these tools have been updated to include switches for both IPv4 and IPv6.

Ping

Ping stands for Packet Internet Groper and can be used to send an ICMP (Internet Control Message Protocol) echo request and receive an echo reply, which verifies the availability of local or remote machines. Ping can be thought of as a utility which sends a message to another machine asking it to confirm that it is still there. By default, Ping sends four ICMP echo request packets and waits up to four seconds for each reply. These defaults can be changed; the number of packets sent and the time to wait for responses can be altered through the options available for Ping.
As well as verifying the availability of  remote machines, Ping can assist in  determining name resolution issues. To use Ping, go to a command prompt and enter Ping Targetname. Several different parameters are available to be used with Ping. To show all the parameters enter Ping /? or Ping (with no parameters). The parameters for use with the Ping command are as below:

  • -4 : Specifies that IPv4 should be used to ping. This is not required when the target machine is identified by an IPv4 address; it is only needed when the target machine is identified by name.
  • -6 : Specifies that IPv6 should be used to ping. As with –4, this is not required when the target machine is identified by an IPv6 address; it is only needed when the target machine is identified by name.
  • -a : Resolves the IP address to the hostname which is displayed if this command is successful.
  • -f : Requests that echo request messages are sent with the Don’t Fragment flag set in the packets (only available in IPv4).
  • -i ttl : Sets the value of the TTL (Time to Live) field in the echo request packets; the maximum value is 255.
  • -j HostList : Routes the packets using the host list (this is a listing of IP addresses which are separated by spaces), hosts can be separated by intermediate gateways (ie loose source route).
  • -k HostList : Similar to –j but the hosts can’t be separated by intermediate gateways (ie strict source route).
  • -l size : Specifies the length (in bytes) of the packets – default is 32 and the max is 65,527.
  • -n count : Specifies the number of packets which are sent – default is 4.
  • -r count : Records the route taken by the outgoing and returning packets; where possible, specify a count equal to or greater than the number of hops between source and destination. The count must be between 1 and 9.
  • -R : Specifies that the round-trip path should be traced (this is only available on IPv6).
  • -s count : Sets a time stamp for the number of hops specified by count, this count needs to be between 1 and 4.
  • -S SrcAddr : Sets the source address  (this is only available on IPv6).
  • -t : Specifies that Ping should continue sending packets to the destination until interrupted. To stop and display statistics, press Ctrl+Break. To stop and quit PING, press Ctrl+C.
  • -v TOS : Sets the value of the type of service in the packet sent (default for this setting is zero). TOS is specified by a decimal between 0 and 255.
  • -w timeout : Sets the time in milliseconds for the packet timeout. If the reply isn’t received before a timeout, the Request Timed Out error message will be shown. The default timeout is four seconds.
  • TargetName : Sets the hostname or IP address of the destination to ping.
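
As a quick illustration (the host name and address below are placeholders, so substitute your own), the first command sends two echo requests with a two-second timeout, and the second resolves an IP address back to its hostname:

    ping -n 2 -w 2000 server01.contoso.com
    ping -a 192.168.1.10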

Sometimes remote hosts are configured to ignore all Ping traffic for security reasons, so that they do not acknowledge their presence. Therefore, the inability to ping a server does not always mean the server is not working.

Tracert

Tracert is typically used to determine the path or route taken to a final destination by sending ICMP packets with varying TTL (Time to Live) values. Every router the packet encounters on the way reduces the value of the TTL by a minimum of one, so the TTL is effectively a hop count. The path is determined by looking at the ICMP Time Exceeded messages returned by the intermediate routers. Routers that do not return Time Exceeded messages for expired TTL values are not captured by the Tracert tool; in these cases, asterisks are shown for that particular hop. To show the different parameters which are available to be used with Tracert, open the command prompt and enter tracert (with no parameters) to show the help, or type tracert /?.

The parameters associated with the Tracert tool  are as below:

  • -4 : Specifies  tracert.exe may only use IPv4 for the trace.
  • -6 : Specifies  tracert.exe can only use IPv6 for the trace.
  • -d : Prevents the resolution of the IP addresses of routers to their hostnames; this is typically used to speed up the Tracert results.
  • -h maximumHops : Sets the max number of hops taken before reaching the destination – default is 30 hops.
  • -j HostList : Specifies that packets must use the loose source route option, which allows successive intermediate destinations to be separated by one or more routers. The maximum number of addresses in the host list is 9. This is only useful when tracing IPv4 addresses.
  • -R : Sends the packets to the destination in IPv6, using the destination as an intermediate destination and testing reverse route.
  • -S : Specifies which source address to use, this is only useful when tracing IPv6 addresses.
  • -w timeout : Sets the time in milliseconds to wait for the replies.
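
For example, the following command (against a placeholder destination) skips name resolution and caps the trace at 20 hops, which keeps the output short and fast:

    tracert -d -h 20 www.contoso.com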

Tracert is a good utility for determining the number of hops and the latency of communications between two end-points. Even when using high-speed Internet connections, if the Internet is congested or if the route a packet must follow requires forwarding between many routers along the way, the added latency will cause noticeable delays in communication.

Pathping

The Pathping tool is a route tracing tool which combines features of both the Ping and Tracert commands with some additional information which neither of those two commands provide. Pathping is most suited for a network with routers or multiple routes between  source  and destination hosts. The Pathping command sends out packets to all  routers on its way to a destination, and subsequently gets the results from each packet that is returned from the router. Since Pathping calculates the loss of packets from each hop, it will be easy to determine which router is causing network issues.
To display the parameters in Pathping, open a command prompt and type Pathping /?.
The parameters for the Pathping command are as follows:

  • -4 : Specifies that Pathping should use only IPv4 for the trace.
  • -6 : Specifies that Pathping should use only IPv6 for the trace.
  • -g Host-list : Allows for the hosts being separated by intermediate gateways.
  • -h maximumHops : Sets the max number of hops prior to reaching a target – default is 30 hops.
  • -i address : Uses a specified source address.
  • -n : Specifies that addresses should not be resolved to hostnames.
  • -p period : Sets the number of seconds to wait between pings – default is 0.25 seconds.
  • -q num_queries : Sets the number of queries sent to each host along the route – default is 3.
  • -w timeout : Sets the timeout for replies in milliseconds.
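
A typical run against a placeholder host, skipping name resolution and sending five queries per hop, looks like this:

    pathping -n -q 5 server02.contoso.com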


PowerShell remoting in Windows Server 2008 R2

Posted by Alin D on January 25, 2011

Remoting

With PowerShell 1.0, one of its major disadvantages was the lack of an interface to execute commands on a remote machine. Granted, you could use Windows Management Instrumentation (WMI) to accomplish this, and some cmdlets, such as Get-Process and Get-Service, let you connect to remote machines. But the concept of a native “remoting” interface was sorely missing when PowerShell was first released. In fact, the lack of remote command execution was a glaring gap in functionality that needed to be addressed. Naturally, the PowerShell product team took this limitation to heart and addressed it by introducing a new feature in PowerShell 2.0, called “remoting.”

Remoting, as its name suggests, is a new feature that is designed to facilitate command (or script) execution on remote machines. This could mean execution of a command or commands on one remote machine or thousands of remote machines (provided you have the infrastructure to support this). Additionally, commands can be issued synchronously or asynchronously, one at a time or through a persistent connection called a runspace, and can even be scheduled or throttled.

To use remoting, you must have the appropriate permissions to connect to a remote machine, execute PowerShell, and execute the desired command(s). In addition, the remote machine must have PowerShell 2.0 and Windows Remote Management (WinRM) installed, and PowerShell must be configured for remoting.

Additionally, when using remoting, the remote PowerShell session that is used to execute commands determines the execution environment. As such, the commands you attempt to execute are subject to the remote machine’s execution policies, profiles, and preferences.

Warning:
Commands that are executed against a remote machine do not have access to information defined within your local profile. As such, commands that use a function or alias defined in your local profile will fail unless they are defined on the remote machine as well.

How Remoting Works

In its most basic form, PowerShell remoting works using the following conversation flow between “a client” (most likely the machine with your PowerShell session) and “a server” (remote host) that you want to execute command(s) against:

  1. A command is executed on the client.
  2. That command is transmitted to the server.
  3. The server executes the command and then returns the output to the client.
  4. The client displays or uses the returned output.

At a deeper level, PowerShell remoting is very dependent on WinRM for facilitating the command and output exchange between a “client” and “server.” WinRM, which is a component of Windows Hardware Management, is a web-based service that enables administrators to enumerate information on and manipulate a remote machine. To handle remote sessions, WinRM was built around a SOAP-based standards protocol called WS-Management. This protocol is firewall-friendly, and was primarily developed for the exchange of management information between systems that might be based on a variety of operating systems on various hardware platforms.

When PowerShell uses WinRM to ship commands and output between a client and server, that exchange is done using a series of XML messages. The first XML message that is exchanged is a request to the server, which contains the desired command to be executed. This message is submitted to the server using the SOAP protocol. The server, in return, executes the command using a new instance of PowerShell called a runspace. Once execution of the command is complete, the output from the command is returned to the requesting client as the second XML message. This second message, like the first, is also communicated using the SOAP protocol.

This translation into an XML message is performed because you cannot ship “live” .NET objects (how PowerShell relates to programs or system components) across the network. So, to perform the transmission, objects are serialized into a series of XML (CliXML) data elements. When the server or client receives the transmission, it converts the received XML message into a deserialized object type. The resulting object is no longer live. Instead, it is a record of properties based on a point in time and, as such, no longer possesses any methods.
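
You can see this for yourself with a short test; Server01 below is a placeholder for any remoting-enabled machine. The object that comes back carries a type name prefixed with “Deserialized.” and exposes only a handful of base methods rather than the live object’s methods.

    PS C:\> $proc = Invoke-Command -ComputerName Server01 -ScriptBlock { Get-Process -Id $PID }
    PS C:\> $proc.pstypenames[0]                  # Deserialized.System.Diagnostics.Process
    PS C:\> $proc | Get-Member -MemberType Method # only base methods remain; Kill(), WaitForExit() etc. are gone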

Remoting Requirements

To use remoting, both the local and remote computers must have the following:

  • Windows PowerShell 2.0 or later
  • Microsoft .NET Framework 2.0 or later
  • Windows Remote Management 2.0

Note:
Windows Remote Management 2.0 is part of Windows 7 and Windows Server 2008 R2. For down-level versions of Windows, an integrated installation package must be installed, which includes PowerShell 2.0.

Configuring Remoting

By default, WinRM is installed on all Windows Server 2008 R2 machines as part of the default operating system installation. However, for security purposes, PowerShell remoting and WinRM are, by default, configured to not allow remote connections. You can use several methods to configure remoting, as described in the following sections.

Method One The first and easiest method to enable PowerShell remoting is to execute the Enable-PSRemoting cmdlet. For example:

PS C:\> Enable-PSRemoting

Once executed, the following tasks are performed by the Enable-PSRemoting cmdlet:

  • Runs the Set-WSManQuickConfig cmdlet, which performs the following tasks:
    • Starts the WinRM service.
    • Sets the startup type on the WinRM service to Automatic.
    • Creates a listener to accept requests on any IP address.
    • Enables a firewall exception for WS-Management communications.
  • Enables all registered Windows PowerShell session configurations to receive instructions from a remote computer.
  • Registers the “Microsoft.PowerShell” session configuration, if it is not already registered.
  • Registers the “Microsoft.PowerShell32” session configuration on 64-bit computers, if it is not already registered.
  • Removes the “Deny Everyone” setting from the security descriptor for all the registered session configurations.
  • Restarts the WinRM service to make the preceding changes effective.

Note:
To configure PowerShell remoting, the Enable-PSRemoting cmdlet must be executed using the Run As Administrator option.
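
As a quick sketch of how you might confirm that remoting works after running the cmdlet (Server01 is a placeholder for one of your own remoting-enabled hosts, and -Force simply suppresses the confirmation prompts):

    PS C:\> Enable-PSRemoting -Force               # configures WinRM, the listener and the firewall exception
    PS C:\> Test-WSMan -ComputerName Server01      # confirms that WinRM is answering on the remote host
    PS C:\> Enter-PSSession -ComputerName Server01 # opens an interactive remote session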

Method Two The second method to configure remoting is to use Server Manager. Use the following steps to use this method:

  1. Open Server Manager.
  2. In the Server Summary area of the Server Manager home page, click Configure Server Manager Remote Management.
  3. Next, select Enable Remote Management of This Server from Other Computers.
  4. Click OK.

Method Three Finally, the third method to configure remoting is to use GPO. Use the following steps to use this method:

  1. Create a new GPO, or edit an existing one.
  2. Expand Computer Configuration, Policies, Administrative Templates, Windows Components, Windows Remote Management, and then select WinRM Service.
  3. Open the Allow Automatic Configuration of Listeners Policy, select Enabled, and then define the IPv4 filter and IPv6 filter as *.
  4. Click OK.
  5. Next, expand Computer Configuration, Policies, Windows Settings, Security Settings, Windows Firewall with Advanced Security, Windows Firewall with Advanced Security, and then Inbound Rules.
  6. Right-click Inbound Rules, and then click New Rule.
  7. In the New Inbound Rule Wizard, on the Rule Type page, select Predefined.
  8. On the Predefined pull-down menu, select Remote Event Log Management. Click Next.
  9. On the Predefined Rules page, click Next to accept the new rules.
  10. On the Action page, select Allow the Connection, and then click Finish. Allow the Connection is the default selection.
  11. Repeat steps 6 through 10 and create inbound rules for the following predefined rule types:
  • Remote Service Management
  • Windows Firewall Remote Management

Background Jobs

Another new feature that was introduced in PowerShell 2.0 is the ability to use background jobs. By definition, a background job is a command that is executed asynchronously without interacting with the current PowerShell session. However, once the background job has finished execution, the results from these jobs can then be retrieved and manipulated based on the task at hand. In other words, by using a background job, you can complete automation tasks that take an extended period of time to run without impacting the usability of your PowerShell session.

By default, background jobs can be executed on the local computer. But, background jobs can also be used in conjunction with remoting to execute jobs on a remote machine.

Note:
To use background jobs (local or remote), PowerShell must be configured for remoting.
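
A minimal sketch of both flavours is shown below; the remote example assumes a host named Server01 that has already been configured for remoting:

    # Local background job: start it, check on it, then collect the output
    PS C:\> $job = Start-Job -ScriptBlock { Get-EventLog -LogName System -Newest 1000 }
    PS C:\> Get-Job                       # shows the job state (Running, Completed, Failed)
    PS C:\> Receive-Job -Job $job         # returns the results once the job has finished

    # Remote background job: the same command executed on Server01 via remoting
    PS C:\> Invoke-Command -ComputerName Server01 -AsJob -ScriptBlock { Get-EventLog -LogName System -Newest 1000 }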


Install and Configure Windows Server 2008 DHCP Server

Posted by Alin D on December 8, 2010

Introduction

Dynamic Host Configuration Protocol (DHCP) is a core infrastructure service on any network that provides IP addressing and DNS server information to PC clients and any other device. DHCP is used so that you do not have to statically assign IP addresses to every device on your network and manage the issues that static IP addressing can create. More and more, DHCP is being expanded to fit into new network services like the Windows Health Service and Network Access Protection (NAP). However, before you can use it for more advanced services, you need to first install it and configure the basics. Let’s learn how to do that.

Installing Windows Server 2008 DHCP Server

Installing Windows Server 2008 DHCP Server is easy. DHCP Server is now a “role” of Windows Server 2008 – not a Windows component as it was in the past.

To do this, you will need a Windows Server 2008 system already installed and configured with a static IP address. You will need to know your network’s IP address range, the range of IP addresses you will want to hand out to your PC clients, your DNS server IP addresses, and your default gateway. Additionally, you will want to have a plan for all subnets involved, what scopes you will want to define, and what exclusions you will want to create.

To start the DHCP installation process, you can click Add Roles from the Initial Configuration Tasks window or from Server Manager -> Roles -> Add Roles.

Figure 1: Adding a new Role in Windows Server 2008

When the Add Roles Wizard comes up, you can click Next on that screen.

Next, select that you want to add the DHCP Server Role, and click Next.

Figure 2: Selecting the DHCP Server Role

If you do not have a static IP address assigned on your server, you will get a warning that you should not install DHCP with a dynamic IP address.

At this point, you will begin being prompted for IP network information, scope information, and DNS information. If you only want to install DHCP server with no configured scopes or settings, you can just click Next through these questions and proceed with the installation.

On the other hand, you can optionally configure your DHCP Server during this part of the installation.

In my case, I chose to take this opportunity to configure some basic IP settings and configure my first DHCP Scope.

I was shown my network connection binding and asked to verify it, like this:

Figure 3: Network connection binding

What the wizard is asking is, “what interface do you want to provide DHCP services on?” I took the default and clicked Next.

Next, I entered my Parent Domain, Primary DNS Server, and Alternate DNS Server (as you see below) and clicked Next.

Figure 4: Entering domain and DNS information

I opted NOT to use WINS on my network and I clicked Next.

Then, I was prompted to configure a DHCP scope for the new DHCP Server. I opted to configure an IP address range of 192.168.1.50-100 to cover the 25+ PC clients on my local network. To do this, I clicked Add to add a new scope. As you see below, I named the scope WBC-Local, configured the starting and ending IP addresses of 192.168.1.50-192.168.1.100, a subnet mask of 255.255.255.0, a default gateway of 192.168.1.1, a subnet type of wired, and activated the scope.

Figure 5: Adding a new DHCP Scope

Back in the Add Scope screen, I clicked Next to add the new scope (once the DHCP Server is installed).

I chose to Disable DHCPv6 stateless mode for this server and clicked Next.

Then, I confirmed my DHCP Installation Selections (on the screen below) and clicked Install.

Figure 6: Confirm Installation Selections

After only a few seconds, the DHCP Server was installed and I saw the window, below:

Figure 7: Windows Server 2008 DHCP Server Installation succeeded

I clicked Close to close the installer window, then moved on to how to manage my new DHCP Server.

How to Manage your new Windows Server 2008 DHCP Server

Like the installation, managing Windows Server 2008 DHCP Server is also easy. Back in my Windows Server 2008 Server Manager, under Roles, I clicked on the new DHCP Server entry.

Figure 8: DHCP Server management in Server Manager

While I cannot manage the DHCP Server scopes and clients from here, what I can do is to manage what events, services, and resources are related to the DHCP Server installation. Thus, this is a good place to go to check the status of the DHCP Server and what events have happened around it.

However, to really configure the DHCP Server and see what clients have obtained IP addresses, I need to go to the DHCP Server MMC. To do this, I went to Start -> Administrative Tools -> DHCP Server, like this:

Figure 9: Starting the DHCP Server MMC

When expanded out, the MMC offers a lot of features. Here is what it looks like:

Figure 10: The Windows Server 2008 DHCP Server MMC

The DHCP Server MMC offers IPv4 & IPv6 DHCP Server info including all scopes, pools, leases, reservations, scope options, and server options.

If I go into the address pool and the scope options, I can see that the configuration we made when we installed the DHCP Server did, indeed, work. The scope IP address range is there, and so are the DNS Server & default gateway.

Figure 11: DHCP Server Address Pool

Figure 12: DHCP Server Scope Options

So how do we know that this really works if we do not test it? The answer is that we do not. Now, let’s test to make sure it works.

How do we test our Windows Server 2008 DHCP Server?

To test this, I have a Windows Vista PC Client on the same network segment as the Windows Server 2008 DHCP server. To be safe, I have no other devices on this network segment.

I did an IPCONFIG /RELEASE then an IPCONFIG /RENEW and verified that I received an IP address from the new DHCP server, as you can see below:

Figure 13: Vista client received IP address from new DHCP Server
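
For reference, the client-side test boils down to a handful of commands at a command prompt; the findstr filter on the last line is an optional addition (not part of the original walkthrough) that picks out just the DHCP-related lines from the output.

    ipconfig /release
    ipconfig /renew
    ipconfig /all | findstr /i "DHCP"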

Also, I went to my Windows 2008 Server and verified that the new Vista client was listed as a client on the DHCP server. This did indeed check out, as you can see below:

Figure 14: Win 2008 DHCP Server has the Vista client listed under Address Leases

With that, I knew that I had a working configuration and we are done!

In Summary

In this article, you learned how to install and configure DHCP Server in Windows Server 2008. During that process, you learned what DHCP Server is, how it can help you, how to install it, how to manage the server, and how to configure DHCP server specific settings like DHCP Server scopes. In the end, we tested our configuration and it all worked! Good luck configuring your Windows Server 2008 DHCP Server!


Windows Server 2008 R2 DNSSEC–Secure DNS Connections

Posted by Alin D on November 26, 2010

Introduction

With the upcoming emergence of IPv6, accessing computers through DNS names will be more important than ever. While those of us who have been working with IPv4 for many years have found it fairly easy to remember a great number of IPv4 addresses using the dotted quad system of IP network numbering, the fact is that the IPv6 address space is so large, and the hexadecimal format so complex, that it is likely that only a handful of very dedicated nerds will be able to remember the IP addresses of more than a few computers on their networks. After all, each IPv6 address is 128 bits long – four times as long as an IPv4 address. This is what provides the much larger address space to accommodate the growing number of hosts on the Internet, but it also makes it more difficult for us to remember addresses.

The Problem: Non-secure Nature of the DNS Database

Given the increasing reliance on DNS that is sure to result, we are going to need a way to make sure that the entries in the DNS database are always accurate and reliable – and one of the most effective ways for us to ensure this is to make sure that our DNS databases are secure. Up until recently, DNS had been a relatively non-secure system, with a large number of assumptions made to provide a basic level of trust.

Due to this non-secure nature, there are many high profile instances where the basic trust has been violated and DNS servers have been hijacked (redirecting the resolution of DNS names to rogue DNS servers), DNS records spoofed, and DNS caches poisoned, leading users to believe they are connecting to legitimate sites when in fact they have been led to a web site that contains malicious content or collects their information by pharming. Pharming is similar to phishing, except that instead of following a link in email, users visit the site on their own, using the correct URL of the legitimate site, so they think they’re safe. But the DNS records have been changed to redirect the legitimate URL to the fake, pharming site.

The Solution: Windows Server 2008 R2 DNSSEC

One solution you can use on your intranet to secure your DNS environment is to use the Windows Server 2008 R2 DNSSEC. DNSSEC is a collection of extensions that improve the security of the DNS protocols. These extensions add origin authority, data integrity and authenticated denial of existence to DNS. The solution also adds several new records to DNS, including DNSKEY, RRSIG, NSEC and DS.

How DNSSEC works

What DNSSEC does is allow all the records in the DNS database to be signed, with a method similar to that used for other digitally signed electronic communications, such as email. When a DNS client issues a query, the DNS server returns the requested records along with their digital signatures. The client, which has the public key corresponding to the key that signed the DNS records, is then able to decrypt the hashed value (signature) and validate the responses. In order to do this, the DNS client and server are configured to use the same trust anchor. A trust anchor is a preconfigured public key associated with a particular DNS zone.

DNS database signing is available for both file based (non-Active Directory integrated) and Active Directory integrated zones, and replication is available to other DNS servers that are authoritative for the zones in question.

The Windows 2008 R2 and Windows 7 DNS clients are configured, by default, as non-validating, security-aware, stub resolvers. When this is the case, the DNS client allows the DNS server to perform validation on its behalf, but the DNS client is able to accept the DNSSEC responses returned from the DNSSEC-enabled DNS server. The DNS client itself is configured to use the Name Resolution Policy Table (NRPT) to determine how it should interact with the DNS server. For example, if the NRPT indicates that the DNS client should secure the connection between the DNS client and server, then certificate authentication can be enforced on the query. If security negotiations fail, it is a strong indication that there is a trust issue in the name resolution process, and the name query attempt will fail. By default, when the client returns the DNS query response to the application that made the request, it will only return this information if the DNS server has validated the information.

Ensuring Valid Results

So there are really two methods that are used to ensure that the results of your DNS queries are valid. First, you need to ensure that the DNS servers that your DNS clients connect to are actually the DNS servers you want the DNS clients to connect to – and that they are not rogue or attacker DNS servers that are sending spoofed responses. IPsec is an effective way to ensure the identity of the DNS server. DNSSEC uses SSL to confirm that the connection is secure. The DNS server authenticates itself via a certificate that is signed by a trusted issuer (such as your private PKI).

Keep in mind that if you have IPsec enforced server and domain isolation in force, you must exempt TCP and UDP ports 53 from the policy. Otherwise, IPsec policy will be used instead of certificate based authentication. This will cause the client to fail certificate validation from the DNS server and the secure connection will not be established.

Signed zones

DNSSEC also signs zones, using offline signing with the dnscmd.exe tool. This results in a signed zone file. The signed zone file contains the RRSIG, DNSKEY, DS and NSEC resource records for that zone. After the zone is signed, it has to be reloaded using the dnscmd.exe tool or the DNS Manager console.
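
As a rough sketch of that workflow, the first command below lists the available offline key-generation and zone-signing options (the exact switches depend on your key choices, so they are not spelled out here), and the second reloads the signed zone file on the authoritative server; secure.contoso.com is a placeholder zone name.

    dnscmd /OfflineSign /?
    dnscmd /ZoneReload secure.contoso.com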

One limitation of signing zones is that dynamic updates are disabled. Windows Server 2008 R2 enables DNSSEC for static zones only. The zone must be resigned each time a change is made to the zone, which may severely limit the utility of DNSSEC in many environments.

The Role of Trust Anchors

Trust anchors were mentioned earlier. DNSKEY resource records are used to support trust anchors. A validating DNS server must include at least one trust anchor. Trust anchors apply only to the zone to which they are assigned. If the DNS server hosts several zones, then multiple trust anchors are used.

The DNSSEC enabled DNS server performs validation for a name in a client query as long as the trust anchor is in place for that zone. The client doesn’t need to be DNSSEC aware for the validation to take place, so that non-DNSSEC aware DNS clients can still use this DNS server to resolve names on the intranet.

NSEC/NSEC3

NSEC and NSEC3 are methods that can be used to provide authenticated denial of existence for DNS records. NSEC3 is an improvement on the original NSEC specification that allows you to prevent “zone walking”, which allows an attacker to retrieve all the names in the DNS zone. This is a powerful tool that attackers can use to reconnoiter your network. This capability is not available in Windows Server 2008 R2, as only support for NSEC is included.

However, there is limited support for NSEC3:

  • Windows Server 2008 R2 can host a zone with NSEC that has NSEC3 delegations. However, the NSEC3 child zones are not hosted on Windows DNS servers
  • Windows Server 2008 R2 can be a non-authoritative DNS server configured with a trust anchor for a zone that is signed with NSEC and has NSEC3 child zones.
  • Windows 7 clients can use a non-Microsoft DNS server for DNS name resolution when that server is NSEC3 aware
  • When a zone is signed with NSEC, you can configure the Name Resolution Policy Table to not require validation for the zone. When you do this, the DNS server will not perform validation and will return the response with the AD (Authenticated Data) bit clear.

Deploying DNSSEC

To deploy DNSSEC, you will need to do the following:

  • Understand the key concepts of DNSSEC
  • Upgrade your DNS servers to Windows Server 2008 R2
  • Review zone signing requirements, choose a key rollover mechanism, and identify the secure computers and DNSSEC protected zones
  • Generate and backup the keys that sign your zones. Confirm that DNS is still working and answering queries after signing the zones
  • Distribute your trust anchors to all non-authoritative servers that will perform DNS validation using DNSSEC
  • Deploy certificates and IPsec policy to your DNS server
  • Configure the NRPT settings and deploy IPsec policy to client computers

For more information on deploying a secure DNS design using Windows Server 2008 R2, go here.

Summary

In this article, we provided a high level overview of DNSSEC and discussed the reasons that securing your DNS infrastructure is important to your organization. Windows Server 2008 R2 introduces new features that help make your DNS infrastructure more secure than ever, through the combined use of signed DNS zones, SSL secured connections to trusted DNS servers, and IPsec authentication and encryption. In a future article, we’ll take apart the DNSSEC solution in more detail and look at the specifics of the new resource records, the signing process, and the client/server interactions that take place between a DNSSEC client and server.


Setup FTP 7.5 on Windows Server 2008 and publish through Forefront TMG 2010

Posted by Alin D on November 2, 2010

Introduction

Microsoft has created a new FTP service that has been completely rewritten for Windows Server® 2008. This new FTP service incorporates many new features that enable web authors to publish content better than before, and offers web administrators more security and deployment options.

  • Integration with IIS 7: IIS 7 has a brand-new administration interface and configuration store, and the new FTP service is tightly integrated with this new design. The old IIS 6.0 metabase is gone, and a new configuration store that is based on the .NET XML-based *.config format has taken its place. In addition, IIS 7 has a new administration tool, and the new FTP server plugs seamlessly into that paradigm.
  • Support for new Internet standards: One of the most significant features in the new FTP server is support for FTP over SSL. The new FTP server also supports other Internet improvements such as UTF8 and IPv6.
  • Shared hosting improvements: By fully integrating into IIS 7, the new FTP server makes it possible to host FTP and Web content from the same site by simply adding an FTP binding to an existing Web site. In addition, the FTP server now has virtual host name support, making it possible to host multiple FTP sites on the same IP address. The new FTP server also has improved user isolation, now making it possible to isolate users through per-user virtual directories.
  • Custom authentication providers: The new FTP server supports authentication using non-Windows accounts for IIS Managers and .NET Membership.
  • Improved logging support: FTP logging has been enhanced to include all FTP-related traffic, unique tracking for FTP sessions, FTP sub-statuses, additional detail fields in FTP logs, and much more.
  • New supportability features: IIS 7 has a new option to display detailed error messages for local users, and the FTP server supports this by providing detailed error responses when logging on locally to an FTP server. The FTP server also logs detailed information using Event Tracing for Windows (ETW), which provides additional detailed information for troubleshooting.
  • Extensible feature set: FTP supports extensibility that allows you to extend the built-in functionality that ships with the FTP service. More specifically, there is support for creating your own authentication and authorization providers. You can also create providers for custom FTP logging and for determining the home directory information for your FTP users.

Additional information about new features in FTP 7.5 is available in the “What’s New for Microsoft and FTP 7.5?” topic on Microsoft’s http://www.iis.net/ web site.

This document will walk you through installing the new FTP service and troubleshooting installation issues.

Installing FTP for IIS 7.5

IIS 7.5 for Windows Server 2008 R2

  1. On the taskbar, click Start, point to Administrative Tools, and then click Server Manager.
  2. In the Server Manager hierarchy pane, expand Roles, and then click Web Server (IIS).
  3. In the Web Server (IIS) pane, scroll to the Role Services section, and then click Add Role Services.
  4. On the Select Role Services page of the Add Role Services Wizard, expand FTP Server.
  5. Select FTP Service. (Note: To support ASP.NET Membership or IIS Manager authentication for the FTP service, you will also need to select FTP Extensibility.)
  6. Click Next.
  7. On the Confirm Installation Selections page, click Install.
  8. On the Results page, click Close.
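
If you prefer the command line, the same role services can also be added with the ServerManager PowerShell module on Windows Server 2008 R2. This is only a sketch: the role service names Web-Ftp-Service and Web-Ftp-Ext are assumptions here, so confirm them with Get-WindowsFeature before running the install.

    Import-Module ServerManager
    Get-WindowsFeature *FTP*                         # confirm the exact FTP role service names
    Add-WindowsFeature Web-Ftp-Service, Web-Ftp-Ext  # FTP Service plus FTP Extensibility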

Installing FTP for IIS 7.0

Prerequisites

The following items are required to complete the procedures in this section:

  1. You must be using Windows Server 2008.
  2. Internet Information Services 7.0 must be installed.
  3. If you are going to manage the new FTP server by using the IIS 7.0 user interface, the administration tool will need to be installed.
  4. You must install the new FTP server as an administrator. (See the Downloading and Installing section for more.)
  5. IIS 7.0 supports a shared configuration environment, which must be disabled on each server in a web farm before installing the new FTP server for each node. Note: Shared configuration can be re-enabled after the FTP server has been installed.
  6. The FTP server that is shipped on the Windows Server 2008 DVD must be uninstalled before installing the new FTP server.

Downloading the right version for your server

There are two separate downloadable packages for the new FTP server; you will need to download the appropriate package for your version of Windows Server 2008:

Launching the installation package

You will need to run the installation package as an administrator. This can be accomplished by one of the following methods:

  1. Logging in to your server using the actual account named “Administrator”, then browsing to the download pages listed above or double-clicking the download package if you have saved it to your server.
  2. Logging on using an account with administrator privileges and opening a command-prompt by right-clicking the Command Prompt menu item that is located in the Accessories menu for Windows programs and selecting “Run as administrator”, then typing the appropriate command listed below for your version of Windows to run the installation:
    • 32-bit Windows Versions:
      • msiexec /i FTP 7_x86_75.msi
    • 64-bit Windows Versions:
      • msiexec /i FTP 7_x64_75.msi

Note: One of the above steps is required because the User Account Control (UAC) security component in the Windows Vista and Windows Server 2008 operating systems prevents access to your applicationHost.config file. For more information about UAC, please see the following documentation:

The following steps walk you through all of the required settings to add FTP publishing for the Default Web Site.

Walking through the installation process
  1. When the installation package opens, you should see the following screen. Click Next to continue.
  2. On the next screen, click the I accept check box if you agree to the license terms, and then click Next.
  3. The following screen lists the installation options. Choose which options you want installed from the list, and then click Next.
    • Common files: this option includes the schema file. When installing in a shared server environment, each server in the web farm will need to have this option installed.
    • FTP Publishing Service: this option includes the core components of the FTP service. This option is required for the FTP service to be installed on the server.
    • Managed Code Support: this is an optional component, but features that use managed extensibility require this option before using them, such as ASP.NET and IIS manager authentication. Note: This feature cannot be installed on Windows Server 2008 Core.
    • Administration Features: this option installs the FTP 7 management user interface. This requires the IIS 7.0 manager and .NET framework 2.0 to be installed. Note: This feature cannot be installed on Windows Server 2008 Core.
  4. On the following screen, click Install to begin installing the options that you chose on the previous screen.
  5. When installation has completed, click Read notes to view the FTP README file, or click Finish to close the installation dialog.

Note: If an error occurs during installation, you will see an error dialog. Refer to the Troubleshooting Installation Issues section of this document for more information.

Troubleshooting Installation Issues

When the installation of FTP 7 fails for some reason, you should see a dialog with a button called “Installation log”. Clicking the “Installation log” button will open the MSI installation log that was created during the installation. You can also manually enable installation logging by running the appropriate command listed below for your version of Windows. This will create a log file that will contain information about the installation process:

  • 32-bit Windows Versions:
    • msiexec /L FTP 7.log /I FTP 7_x86_75.msi
  • 64-bit Windows Versions:
    • msiexec /L FTP 7.log /I FTP 7_x64_75.msi

You can analyze this log file after a failed installation to help determine the cause of the failure.

Clicking the “Online information” button on the error dialog will launch the “Installing and Troubleshooting FTP 7.5” document in your web browser.

Note: If you attempt to install the downloaded package on an unsupported platform, the following dialog will be displayed:

Known Issues in This Release

The following issues are known to exist in this release:

  1. While Web-based features can be delegated to remote managers and added to web.config files using the new IIS 7 configuration infrastructure, FTP features cannot be delegated or stored in web.config files.
  2. The icon of a combined Web/FTP site may be marked with a question mark even though the site is currently started with no error. This occurs when a site has a mixture of HTTP/FTP bindings.
  3. After adding an FTP publishing to a Web site, clicking the site’s node in the tree view of the IIS 7 management tool may not display the FTP icons. To work around this issue, use one of the following:
    • Hit F5 to refresh the IIS 7 management tool.
    • Click on the Sites node, then double-click on the site name.
    • Close and re-open the IIS 7 management tool.
  4. When you add a custom provider in the site defaults, it shows up under each site. However, if you attempt to remove or modify the settings for a custom provider at the site-level, IIS creates an empty <providers /> section for the site, but the resulting configuration for each site does not change. For example, if the custom provider is enabled in the site defaults, you cannot disable it at the site-level. To work around this problem, open your applicationHost.config file as an administrator, add a <clear/> element to the list of custom authentication providers, then manually add the custom provider to your settings. For example, in order to add the IIS Manager custom authentication provider, you would add settings like the following example:
    <ftpServer>
    <security>
    <authentication>
    <customAuthentication>
    <providers>
    <clear />
    <add name="IisManagerAuth" enabled="true" />
    </providers>
    </customAuthentication>
    </authentication>
    </security>
    </ftpServer>
  5. The following issues are specific to the IIS 7.0 release:
    • The FTP service that is shipped on the Windows Server 2008 DVD should not be installed after the new FTP service has been installed. The old FTP service does not detect that the new FTP service has been installed, and running both FTP services at the same time may cause port conflicts.
    • IIS 7 can be uninstalled after the new FTP service has been installed, and this will cause the new FTP service to fail. If IIS is reinstalled, new copies of the IIS configuration files will be created and the new FTP service will continue to fail because the configuration information for the new FTP service is no longer in the IIS configuration files. To fix this problem, re-run the setup for the new FTP service and choose “Repair”.

To Add an FTP Site from the IIS Management Console

Creating a New FTP Site Using IIS 7 Manager

The new FTP service makes it easy to create new FTP sites by providing you with a wizard that walks you through all of the required steps to create a new FTP site from scratch.

Step 1: Use the FTP Site Wizard to Create an FTP Site

In this first step you will create a new FTP site that anonymous users can open.

Note: The settings listed in this walkthrough specify “%SystemDrive%\inetpub\ftproot” as the path to your FTP site. You are not required to use this path; however, if you change the location for your site you will have to change the site-related paths that are used throughout this walkthrough.

  1. Open IIS 7 Manager. In the Connections pane, click the Sites node in the tree.
  2. As shown in the image below, right-click the Sites node in the tree and click Add FTP Site, or click Add FTP Site in the Actions pane.
    • Create a folder at “%SystemDrive%\inetpub\ftproot”
    • Set the permissions to allow anonymous access:
      1. Open a command prompt.
      2. Type the following command:
        ICACLS "%SystemDrive%\inetpub\ftproot" /Grant IUSR:R /T
      3. Close the command prompt.


  3. When the Add FTP Site wizard appears:
    • Enter “My New FTP Site” in the FTP site name box, then navigate to the %SystemDrive%\inetpub\ftproot folder that you created in the Prerequisites section. Note that if you choose to type in the path to your content folder, you can use environment variables in your paths.
    • When you have completed these items, click Next.


  4. On the next page of the wizard:
    • Choose an IP address for your FTP site from the IP Address drop-down, or choose to accept the default selection of “All Unassigned.” Because you will be using the administrator account later in this walk-through, you must ensure that you restrict access to the server and enter the local loopback IP address for your computer by typing “127.0.0.1” in the IP Address box. (Note: If you are using IPv6, you should also add the IPv6 localhost binding of “::1”.)
    • Enter the TCP/IP port for the FTP site in the Port box. For this walk-through, choose to accept the default port of 21.
    • For this walk-through, do not use a host name, so make sure that the Virtual Host box is blank.
    • Make sure that the Certificates drop-down is set to “Not Selected” and that the Allow SSL option is selected.
    • When you have completed these items, click Next.


  5. On the next page of the wizard:
    • Select Anonymous for the Authentication settings.
    • For the Authorization settings, choose “Anonymous users” from the Allow access to drop-down, and select Read for the Permissions option.
    • When you have completed these items, click Finish.


Summary

You have successfully created a new FTP site using the new FTP service. To recap the items that you completed in this step:

  1. You created a new FTP site named “My New FTP Site”, with the site’s content root at “%SystemDrive%\inetpub\ftproot”.
  2. You bound the FTP site to the local loopback address for your computer on port 21, and you chose not to use Secure Sockets Layer (SSL) for the FTP site.
  3. You created a default rule for the FTP site to allow anonymous users “Read” access to the files.

Step 2: Adding Additional FTP Security Settings

Creating a new FTP site that anonymous users can browse is useful for public download sites, but web authoring is equally important. In this step, you add additional authentication and authorization settings for the administrator account. To do so, follow these steps:

  1. In IIS 7 Manager, click the node for the FTP site that you created earlier, then double-click FTP Authentication to open the FTP authentication feature page.
  2. When the FTP Authentication page displays, highlight Basic Authentication and then click Enable in the Actions pane.
  3. In IIS 7 Manager, click the node for the FTP site to re-display the icons for all of the FTP features.
  4. You must add an authorization rule so that the administrator can log in. To do so, double-click the FTP Authorization Rules icon to open the FTP authorization rules feature page.
  5. When the FTP Authorization Rules page is displayed, click Add Allow Rule in the Actions pane.
  6. When the Add Allow Authorization Rule dialog box displays:
    • Select Specified users, then type “administrator” in the box.
    • For Permissions, select both Read and Write.
    • When you have completed these items, click OK.
Summary

To recap the items that you completed in this step:

  1. You added Basic authentication to the FTP site.
  2. You added an authorization rule that allows the administrator account both “Read” and “Write” permissions for the FTP site.

Step 3: Logging in to Your FTP Site

In Step 1, you created an FTP site that anonymous users can access, and in Step 2 you added additional security settings that allow an administrator to log in. In this step, you log in both anonymously and using your administrator account.

Note: In this step, you log in to your FTP site using the local administrator account. When creating the FTP site in Step 1 you bound the FTP site to the local loopback IP address. If you did not use the local loopback address, use SSL to protect your account settings. If you prefer to use a separate user account instead of the administrator account, set the correct permissions for that user account for the appropriate folders.

Logging in to your FTP site anonymously
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server: FTP localhost
  3. When prompted for a user name, enter “anonymous”.
  4. When prompted for a password, enter your email address.

You should now be logged in to your FTP site anonymously. Based on the authorization rule that you added in Step 1, you should only have Read access to the content folder.

Logging in to your FTP site using your administrator account
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server: FTP localhost
  3. When prompted for a user name, enter “administrator”.
  4. When prompted for a password, enter your administrator password.

You should now be logged in to your FTP site as the local administrator. Based on the authorization rule that you added in Step 2 you should have both Read and Write access to the content folder.

Summary

To recap the items that you completed in this step:

  1. You logged in to your FTP site anonymously.
  2. You logged in to your FTP site as the local administrator.

Publish FTP site from Forefront TMG 2010

Let’s begin

Note:
Keep in mind that the information in this article is based on a release candidate version of Microsoft Forefront TMG and is subject to change.

A few months ago, Microsoft released RC 1 (Release Candidate) of Microsoft Forefront TMG (Threat Management Gateway), which has a lot of new exciting features.

One of the new features of Forefront TMG is its ability to allow FTP server traffic through the Firewall in both directions. It does this in the form of Firewall access rules for outbound FTP access and with server publishing rules for inbound FTP access through a published FTP Server. This server is located in your internal network or a perimeter network, also known as a DMZ (if you are not using public IP addresses for the FTP Server in the DMZ).

First, I will show you the steps you will need to follow in order to create a Firewall rule which will allow FTP access for outgoing connections through TMG.

FTP access rule

Create a new access rule which allows the FTP protocol for your clients. If you want to allow FTP access for your clients, the clients must be SecureNAT or TMG clients (the TMG client was known as the Firewall client in previous ISA Server versions).

Please note:
If you are using the Web proxy client, you should note that this type of client only allows read-only FTP access and you cannot use a classic FTP client; only web browser FTP access is possible, with some limitations.

The following picture shows an FTP access rule.

Figure 1: FTP access rule

A well-known pitfall, dating back to ISA Server 2004, is that by default, after the FTP access rule has been created, the rule only allows read-only FTP access. This is done for security purposes, to prevent users from uploading confidential data outside the organization without permission. If you want to enable FTP uploads, you have to right-click the FTP access rule, and then click Configure FTP.

Figure 2: Configure FTP

All you have to do is remove the read-only flag and wait for the new FTP connection to be established; users then get all the necessary permissions to carry out FTP uploads.

Figure 3: Allow write access through TMG

FTP Server publishing

If you want to allow incoming FTP connections to your internal FTP servers, or to FTP servers located in the DMZ, you have to create server publishing rules if the network relationship between the external and the internal/DMZ network is NAT. If you are using a route network relationship, it is possible to use Firewall rules to allow FTP access.

To gain access to an FTP server in your internal network, create an FTP server publishing rule.

Simply start the new Server Publishing Rule Wizard and follow the instructions.

As the protocol you have to select the FTP Server protocol definition which allows inbound FTP access.

Figure 4: Publish the FTP-Server protocol

The standard FTP Server protocol definition uses the associated standard protocol, which can be used for inspection by NIS if a NIS signature is available.

Figure 5: FTP-Server protocol properties

The Standard FTP Server protocol definition allows FTP Port 21 TCP for inbound access and the protocol definition is bound to the FTP access filter which is responsible for the FTP protocol port handling (FTP Data and FTP control port).

Figure 6: FTP ports and FTP Access Filter binding

Active FTP

One of the changes in Microsoft Forefront TMG is that the Firewall no longer allows Active FTP connections by default, for security reasons. You have to manually allow the use of Active FTP connections. It is possible to enable this feature in the properties of the FTP access filter. Navigate to the System node in the TMG management console, select the Application Filters tab, select the FTP Access filter and in the task pane click Configure Selected Filter (Figure 7).

Figure 7: FTP Access filter properties

In the FTP access filter properties select the FTP Properties tab and enable the checkbox Allow Active FTP Access and save the configuration to the TMG storage.

Figure 8: Allow Active FTP through TMG

FTP alerts

Forefront TMG comes with a lot of predefined alert settings for several components and events. One of them is the alert function for the FTP Filter Initialization Warning. This alert informs the administrator when the FTP filter fails to parse the allowed FTP commands.

Figure 9: Configure FTP alert options

The alert actions are almost the same as in ISA Server 2006, so there are no new things to explain for experienced ISA Administrators.

Conclusion

In this article, I showed you some ways to allow FTP access through the TMG Server. There are some pitfalls on the way to a successful FTP implementation. One of them, present since ISA Server 2004, is that FTP write access through the Firewall must be explicitly enabled; the other pitfall is new to Forefront TMG: Active Mode FTP connections are not allowed by default, so you have to manually activate this feature if you really need this type of special configuration.


Enable and configure Windows PowerShell Remoting using Group Policy

Posted by Alin D on October 11, 2010

As you may know, Windows PowerShell 2.0 introduced a new remoting feature, allowing for remote management of computers.

While this feature can be enabled manually (or scripted) with the PowerShell 2.0 cmdlet Enable-PSRemoting, I would recommend using Group Policy whenever possible. This guide will show you how this can be accomplished for Windows Vista, Windows Server 2008 and above. For Windows XP and Windows Server 2003, running Enable-PSRemoting in a PowerShell startup script would be the best approach.

Windows PowerShell 2.0 and WinRM 2.0 shipped with Windows 7 and Windows Server 2008 R2. To take advantage of Windows PowerShell Remoting, both of these are required on the downlevel operating systems Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008. Both Windows PowerShell 2.0 and WinRM 2.0 are available for download here, as part of the Windows Management Framework (Windows PowerShell 2.0, WinRM 2.0, and BITS 4.0). To deploy this update to downlevel operating systems I would recommend using WSUS, which is described in detail in this blog post.

Group Policy Configuration

Open the Group Policy Management Console from a domain-joined Windows 7 or Windows Server 2008 R2 computer.

Create or use an existing Group Policy Object, open it, and navigate to Computer Configuration->Policies->Administrative templates->Windows Components

Here you will find the available Group Policy settings for Windows PowerShell, WinRM and Windows Remote Shell:


To enable PowerShell Remoting, the only setting we need to configure is found under “WinRM Service”, named “Allow automatic configuration of listeners”:


Enable this policy, and configure the IPv4 and IPv6 addresses to listen on. To configure WinRM to listen on all addresses, simply use *.

In addition, the WinRM service is not started by default on Windows client operating systems. To configure the WinRM service to start automatically, navigate to Computer Configuration->Policies->Windows Settings->Security Settings->System Services->Windows Remote Management, double-click Windows Remote Management and set the service startup mode to “Automatic”:



No other settings need to be configured; however, I've provided screenshots of the other settings so you can see what's available.


There is one more thing to configure though; the Windows Firewall.

You need to create a new Inbound Rule under Computer Configuration->Policies->Windows Settings->Windows Firewall with Advanced Security->Windows Firewall with Advanced Security->Inbound Rules:


The WinRM port numbers are predefined as “Windows Remote Management”:


With WinRM 2.0, the default HTTP listener port changed from TCP 80 to TCP 5985. The old port number is part of the predefined scope for compatibility reasons, and may be excluded if you don't have any legacy WinRM 1.1 listeners.


When the rule is created, you may choose to make further restrictions, e.g. to only allow the IP addresses of your management subnet, or perhaps some specific user groups:


Now that the firewall rule is configured, we are done with the minimal configuration needed to enable PowerShell Remoting using Group Policy.


On a computer affected by the newly configured Group Policy Object, run gpupdate and check whether the settings were applied.

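A minimal check from an elevated PowerShell prompt (the winrm tool ships with WinRM 2.0):

# Refresh Group Policy on the client
gpupdate /force

# List the WinRM listeners; a listener created by the GPO shows Source="GPO"
winrm enumerate winrm/config/listener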

As you can see, the listener indicates Source="GPO", meaning it was configured from a Group Policy Object.

When the GPO has been applied to all the affected computers, you are ready to test the configuration.

Here is a sample usage of PowerShell Remoting combined with the Active Directory-module for Windows PowerShell:

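A minimal sketch of such a command sequence, assuming the Active Directory module is installed and using placeholder OU and domain names:

# Import the Active Directory module for Windows PowerShell
Import-Module ActiveDirectory

# Save all computer objects in the Domain Controllers OU in a variable (contoso.com is a placeholder domain)
$DCs = Get-ADComputer -Filter * -SearchBase "OU=Domain Controllers,DC=contoso,DC=com"

# Invoke a scriptblock on each Domain Controller and return the status of the Netlogon service
foreach ($DC in $DCs) {
    Invoke-Command -ComputerName $DC.Name -ScriptBlock { Get-Service -Name Netlogon }
}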

The example saves all computer objects in the Domain Controllers Organizational Unit in a variable. Then a foreach loop invokes a scriptblock, returning the status of the Netlogon service on all of the Domain Controllers.

Summary

We've now had a look at how to enable and configure PowerShell Remoting using Group Policy.
The new Remoting feature in Windows PowerShell 2.0 opens up an incredible number of opportunities.

Posted in Windows 2008 | Leave a Comment »

Windows 2008 Server Role Servers Explained

Posted by Alin D on October 7, 2010

A server on a network – standalone or member – can function in a number of roles. As the needs of your computing environment change, you may want to change the role of a server. By using the Server Manager and the Add Roles Wizard, you can install Active Directory Domain Servers to promote a member server to a domain controller, or you can install individual roles or combinations of various roles, such as DHCP, WINS, and DNS.

It is also relatively straightforward to demote a domain controller to a simple role server or remove any number of roles and features from a server.

Server Manager is the key configuration console you will use for installing server roles and features on your server. It can be configured to open automatically as soon as you log in to the Windows console or desktop.
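
Roles and features can also be installed from the command line. A brief sketch using servermanagercmd.exe and, on Windows Server 2008 R2, the ServerManager PowerShell module; the DNS Server role is only an example here:

# Windows Server 2008: query installed roles/features and install the DNS Server role
servermanagercmd.exe -query
servermanagercmd.exe -install DNS

# Windows Server 2008 R2: the ServerManager module offers the same from PowerShell
Import-Module ServerManager
Add-WindowsFeature DNS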

Types of roles

Let’s look at the various roles and features you can install on Windows Server 2008.

Active Directory Certificate Services (AD CS)
AD CS role services install on a number of operating systems, including Windows Server 2008, Windows Server 2003, and Windows 2000 Server. Naturally the fullest implementation of AD CS is only possible on Windows Server 2008. You can deploy AD CS as a single standalone certification authority (CA), or you can deploy multiple servers and configure them as root, policy, and certificate issuing authorities. You also have a variety of Online Responder configuration possibilities.

Active Directory Domain Services (AD DS)
This is the role in the Windows Server 2008 operating system that stores information about users, computers, and other resources on a network. AD DS is also used for directory-enabled applications such as Microsoft Exchange Server.

Active Directory Federation Services (AD FS)
AD FS employs technology that allows users, over the life of a single online session, to securely share digital identity and entitlement rights, or “claims”, across security and enterprise boundaries. This role – introduced and supported on all operating systems since Microsoft Windows Server 2003 R2 – provides Web Single Sign-On (SSO) services to allow a user to access multiple, related Web applications.

Active Directory Lightweight Directory Services (AD LDS)
This service is ideal if you are required to support directory-enabled applications. AD LDS is a Lightweight Directory Access Protocol (LDAP) compliant directory service.

Active Directory Rights Management Services (AD RMS)
This service augments an organization’s security strategy by protecting information through persistent usage policies. The key to the service is that the right management policies are bound to the information no matter where it resides or to where it is moved. AD RMS is used to lock down documents, spreadsheets, e-mail, and so on from being infiltrated or ending up in the wrong hands. AD RMS, for example, prevents e-mails from being accidentally forwarded to the wrong people.

The Application Server role
This role supports the deployment and operation of custom business applications that are built with Microsoft .NET Framework. The Application Server role lets you choose services for applications that require COM+, Message Queuing, Web services, and distributed transactions (coordinated by the Distributed Transaction Coordinator).

DHCP and DNS
These two roles install two critical network services required for every network. They support Active Directory integration and IPv6. WINS is not classified as a key role for Windows Server 2008, and you install it as a feature, discussed later.

Fax Server role
The fax server lets you set up a service to send and receive faxes over your network. The role creates a fax server and installs the Fax Service Manager and the Fax service on the server.

File Server role
This role lets you set up all the bits, bells, and whistles that come with a Windows file server. This role also lets you install Share and Storage Management, the Distributed File System (DFS), the File Server Resource Manager application for managing file servers, Services for Network File System (NFS), Windows File Services, which include stuff like the File Replication Service (FRS), and so on.

Network Policy and Access Services
This provides the following network connectivity solutions: Network Access Protection (NAP), the client health policy creation, enforcement, and remediation technology; secure wireless and wired access (802.1X), wireless access points, remote access solutions, virtual private network (VPN) services, Radius, and more.

Print Management role
The print services provide a single interface that you use to manage multiple printers and print servers on your network.

Terminal Services role
This service provides technologies that enable users to access Windows-based programs that are installed on a terminal server. Users can execute applications remotely (they still run on the remote server) or they can access the full Windows desktop on the target server.

Universal Description, Discovery, and Integration (UDDI)
UDDI Services provide capabilities for sharing information about Web services. UDDI is used on the intranet, between entities participating on an extranet, or on the Internet.

Web Server role
This role provides IIS 7.0, the Web server, ASP.NET, and the Windows Communication Foundation (WCF).

Windows Deployment Services
These services are used for deployment of new computers in medium to large organizations.

Features

Server Manager also lets you install dozens of features on Windows Server 2008. These so-called features are actually programs or supporting layers that support or augment the functionality of one or more roles, or simply add to the functionality of the server. A good example of a feature is the clustering service. Now called Failover Clustering, this feature can be used to support mission-critical roles such as File Services, Printer Services, and DHCP Server, on server clusters. This provides for higher availability and performance.

Other features you will likely install include SMTP Server, Telnet Client and Server, Group Policy Management (for use with Active Directory), Remote Assistance, and more.

Posted in Windows 2008 | Leave a Comment »

Reasons to Upgrade Your DNS Server to Windows Server 2008 R2

Posted by Alin D on October 7, 2010

Introduction

DNS is the backbone of network communications. Without DNS you would be forced to memorize the IP addresses of all the clients and servers on your network. That might have been something you could have done in 1985, but it’s really not realistic as we enter the second decade of the 21st century. And DNS is going to be even more important as we slowly transition from IPv4 to IPv6. While some talented administrators could realistically remember the dotted quad addresses for dozens or maybe even hundreds of servers, that just isn’t going to happen with IPv6, where addresses are 128-bit values written in hexadecimal. IPv6 is going to bring DNS back to the forefront of your awareness.

Because DNS is going to be ever more important, you’re going to need to be sure that your DNS server solution is secure. Historically, there was a large amount of implicit trust in DNS deployments. There was an implicit trust that the DNS client could trust the DNS server, and there was implicit trust that the records returned from the DNS server to the DNS client were valid. While this “gentleman’s agreement” has worked reasonably well for the last few decades, the time has come when we need to be able to guarantee that the information provided by the DNS server is valid and that client/server DNS communications are secure.

This has me thinking about the Windows Server 2008 R2 DNS server. There are several new features in the Windows Server 2008 R2 DNS server that you can use to improve the overall security of your DNS infrastructure. These include:

  • DNS Security Extensions (DNSSEC)
  • Control over DNS devolution behavior
  • DNS cache locking
  • DNS Socket Pool

In this article, I’m going to provide you a brief overview of each of these features and how you can use them to create a more secure DNS for your network.

DNS Security Extensions (DNSSEC)

DNSSEC is a group of specifications from the Internet Engineering Task Force (IETF) that provide for origin authentication of DNS data, authenticated denial of existence and data integrity (not data confidentiality). The purpose of DNSSEC is to protect against forged DNS information (for example, DNS cache poisoning) by using digital signatures. DNSSEC is actually a collection of new features added to the DNS client/server interaction that help increase the security of the basic DNS protocols. The core DNSSEC features are specified in:

  • RFC 4033
  • RFC 4034
  • RFC 4035

DNSSEC introduces several new terms and technologies on both the client and server side. For example, DNSSEC adds four new DNS resource records:

  • DNSKEY
  • RRSIG
  • NSEC
  • DS

Windows Server 2008 R2 Implementation

Windows Server 2008 R2 and Windows 7 are the first Microsoft operating systems to support DNSSEC. You can now sign and host DNSSEC signed zones to increase the level of security for your DNS infrastructure. The following DNSSEC related features are introduced in Windows Server 2008 R2:

  • The ability to sign a zone (that is, to provide the zone a digital signature)
  • The ability to host signed zones
  • New support for the DNSSEC protocol
  • New support for DNSKEY, RRSIG, NSEC, and DS resource records.

DNSSEC can add origin authority (confirmation and validation of the origin of the DNS information presented to the DNS client), data integrity (assurance that the data has not been changed), and authenticated denial of existence (a signed response confirming that a record does not exist) to DNS.

Windows 7/Server 2008 R2 DNS Client Improvements

In addition to the DNS server updates in Windows Server 2008 R2, there are some improvements in the Windows 7 DNS client (which also includes the DNS client service in Windows Server 2008 R2):

  • The ability to communicate awareness of DNSSEC in DNS queries (which is required if you decide to use signed zones)
  • The ability to process the DNSKEY, RRSIG, NSEC, and DS resource records.
  • The ability to determine whether the DNS server to which it sent a DNS query has performed validation on behalf of the client.

DNSSEC and the NRPT

If you’re acquainted with DirectAccess, you might be interested in the fact that DNSSEC leverages the Name Resolution Policy Table (NRPT). The DNS client DNSSEC related behavior is set by the NRPT. The NRPT enables you to create a type of policy based routing for DNS queries. For example, you can configure the NRPT to send queries for contoso.com to DNS server 1, while queries for all other domains are sent to the DNS server address configured on the DNS client’s network interface card. You configure the NRPT in Group Policy. The NRPT is also used to enable DNSSEC for defined namespaces, as seen in Figure 1 below.


Figure 1
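
On a Windows 7 or Windows Server 2008 R2 client you can inspect the NRPT rules that Group Policy delivered with netsh; a quick, hedged check (the namespace context is the one used for NRPT/DirectAccess):

# Show the NRPT rules configured locally or through Group Policy
netsh namespace show policy

# Show the NRPT rules that are actually in effect on this client
netsh namespace show effectivepolicy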

Understanding how DNSSEC works

A key feature of DNSSEC is that it enables you to sign a DNS zone – which means that all the records for that zone are also signed. The DNS client can take advantage of the digital signature added to the resource records to confirm that they are valid. This is typical of what you see in other areas where you have deployed services that depend on PKI. The DNS client can validate that the response hasn’t been changed using the public/private key pair. In order to do this, the DNS client has to be configured to trust the signer of the signed zone.

The new Windows Server 2008 R2 DNSSEC support enables you to sign file-based and Active Directory integrated zones through an offline zone signing tool. I know it would have been easier to have a GUI interface for this, but I guess Microsoft ran out of time or figured that not enough people would actually use this feature to make it worthwhile to make the effort to create a convenient graphical interface for signing a zone. The signing process is also done off-line. After the zone is signed, it can be hosted by other DNS servers using typical zone transfer methodologies.

When configured with a trust anchor, a DNS server is able to validate DNSSEC responses received on behalf of the client. However, in order to prove that a DNS answer is correct, you need to know at least one key or DS record that is correct from sources other than the DNS. These starting points are called trust anchors.

Another change in the Windows 7 and Windows Server 2008 R2 DNS client is that it acts as a security-aware stub resolver. This means that the DNS client will let the DNS server handle the security validation tasks, but it will consume the results of the security validation efforts performed by the DNS server. The DNS clients take advantage of the NRPT to determine when they should check for validation results. After the client confirms that the response is valid, it will return the results of the DNS query to the application that triggered the initial DNS query.

Using IPsec with DNSSEC

In general, it’s a good idea to use IPsec to secure communications between all machines that participate on your managed network. The reason for this is that it’s very easy for an intruder to put network analysis software on your network and intercept and read any non-encrypted content that moves over the wire. However, if you use DNSSEC, you’ll need to be aware of the following when crafting your IPsec policies:

  • DNSSEC uses SSL to secure the connection between the DNS client and server. There are two advantages of using SSL: first, it encrypts the DNS query traffic between the DNS client and DNS server, and second, it allows the DNS client to authenticate the identity of the DNS server, which helps ensure that the DNS server is a trusted machine and not a rogue.
  • You need to exempt both TCP port 53 and UDP port 53 from your domain IPsec policy. The reason for this is that the domain IPsec policy will be used and DNSSEC certificate-based authentication will not be performed. The end result is that the client will fail the EKU validation and end up not trusting the DNS server.

Control Over DNS Devolution

DNS devolution has been available for a long time in Windows DNS clients. No, it doesn’t mean that the operating systems are less evolved. Devolution allows your client computers that are members of a subdomain to access resources in the parent domain without the need to provide the exact FQDN for the resource.

For example, if the client uses the primary DNS suffix corp.contoso.com and devolution is enabled with a devolution level of two, an application attempting to query the host name server1 will attempt to resolve:

  • server1.corp.contoso.com and
  • server1.contoso.com

Notice that when the devolution level is set to two, the devolution process stops when there are two labels left in the domain name (in this case, contoso.com).

Now, if the devolution level were set to three, the devolution process would stop with server1.corp.contoso.com, since server1.contoso.com only has two labels in the domain name (contoso.com).

However, devolution is not enabled in Active Directory domains when:

  1. There is a global suffix search list assigned by Group Policy.
  2. The DNS client does not have the Append parent suffixes of the primary DNS suffix check box selected on the DNS tab in the Advanced TCP/IP Settings for IPv4 or IPv6 Internet Protocol (TCP/IP) Properties of a client computer’s network connection, as shown in Figure 2. Parent suffixes are obtained by devolution.


Figure 2

Previous versions of Windows had an effective devolution level of two. What’s new in Windows Server 2008 R2 is that you can now define your own devolution level, which gives you more control over the organizational boundaries in an Active Directory domain when clients try to resolve names in the domain. You can set the devolution level using Group Policy, as seen in Figure 3 below (Computer Configuration->Policies->Administrative Templates->Network->DNS Client).


Figure 3

DNS Cache Locking

Cache locking in Windows Server 2008 R2 enables you to control the ability to overwrite information contained in the DNS cache. When DNS cache locking is turned on, the DNS server will not allow cached records to be overwritten for the duration of the time to live (TTL) value. This helps protect your DNS server from cache poisoning. You can also customize the settings used for cache locking.

When a DNS server configured to perform recursion receives a DNS request, it caches the results of the DNS query before returning the information to the machine that sent the request. Like all caching solutions, the goal is to enable the DNS server to provide information from the cache with subsequent requests, so that it won’t have to take the time to repeat the query. The DNS server keeps the information in the DNS server cache for a period of time defined by the TTL on the resource record. However, it is possible for information in the cache to be overwritten if new information about that resource record is received by the DNS server. One scenario where this might happen is when an attacker attempts to poison your DNS cache. If the attacker is successful, the poisoned cache might return false information to DNS clients and send the clients to servers owned by the attacker.

Cache locking is configured as a percentage of the TTL. For example, if the cache locking value is set to 25, then the DNS server will not overwrite a cached entry until 25% of the time defined by the TTL for the resource record has passed. The default value is 100, which means that the entire TTL must pass before the cached record can be updated. The cache locking value is stored in the CacheLockingPercent registry key. If the registry key is not present, then the DNS server will use the default cache locking value of 100. The preferred method of configuring the cache locking value is through the dnscmd command line tool.

An example of how to configure cache locking is seen in Figure 4 below. The percent value can range from 0 to 100.


Figure 4
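
For reference, a dnscmd invocation along those lines, run from an elevated prompt (the value 75 is only an illustration):

# Allow cached records to be overwritten only after 75% of the TTL has elapsed
dnscmd /Config /CacheLockingPercent 75

# Restart the DNS Server service so the new value takes effect
Restart-Service DNS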

Swimming in the Windows Server 2008 R2 DNS Socket Pool

OK, so you can’t swim in a socket pool. But what you can do with the Windows Server 2008 R2 DNS socket pool is enable the DNS server to use source port randomization when issuing DNS queries. Why would you want to do this? Because the source port randomization provides protection against some types of cache poisoning attacks, such as those described over here.

The initial fix included some default settings, but with Windows Server 2008 R2 you can customize socket pool settings.

Source port randomization protects against DNS cache poisoning attacks. With source port randomization, the DNS server will randomly pick a source port from a pool of available sockets that it opens when the service starts. This helps prevent an unauthenticated remote attacker from sending specially crafted responses to DNS requests in order to poison the DNS cache and forward traffic to locations that are under the control of an attacker.

Previous versions of the Windows DNS server used a predictable collection of source ports when issuing DNS query requests. With the new DNS socket pool, the DNS server will use a random port number selected from the socket pool. This makes it much more difficult for an attacker to guess the source port of a DNS query. To further thwart  the attacker, a random transaction ID is added to the mix, making it even more difficult to execute the cache poisoning attack.

The socket pool starts with a default of 2500 sockets. However, if you want to make things even tougher for attackers, you can increase it up to a value of 10,000. The more sockets you have available in the pool, the harder it’s going to be to guess which socket is going to be used, thus frustrating the cache poisoning attacker. On the other hand, you can configure the pool value to be zero. In that case, you’ll end up with a single socket value that will be used for DNS queries, something you really don’t want to do. You can even configure certain ports to be excluded from the pool.

Like the DNS cache feature, you configure the socket pool using the dnscmd tool. The figure below shows you an example using the default values.


Figure 5
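
A sketch of the equivalent dnscmd commands (5000 is only an illustration; the exact argument format for the excluded-range parameter is best checked against the dnscmd documentation):

# Increase the socket pool from the default of 2500 sockets
dnscmd /Config /SocketPoolSize 5000

# Optionally exclude a port range from the pool
dnscmd /Config /SocketPoolExcludedPortRanges 10000-10100

# Restart the DNS Server service so the new pool is allocated
Restart-Service DNS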

Summary

In this article we went over several new features included in the Windows Server 2008 R2 server and Windows 7 DNS client that increase the security and performance of your DNS infrastructure. The combination of DNSSEC, improvements in control over DNS devolution, security enhancements in the DNS cache and the DNS socket pool all provide compelling reasons to upgrade your DNS servers to Windows Server 2008 R2.

Posted in Windows 2008 | Leave a Comment »

Exchange 2010 Database Availability Group

Posted by Alin D on October 5, 2010

Database Availability Group (DAG) is the new Exchange 2010 high availability feature. This feature provides data availability together with service availability. DAG now is the only built-in way to protect data in Exchange 2010.

This article is composed of an introductory section, where we look at the key facts of Database Availability Groups, and a walkthrough of the implementation of DAG. The introductory sections were authored after researching various TechNet articles and here I am reproducing salient information taken from TechNet. This information was restructured for Administrators to have a central reference point rather than having to go through many TechNet articles. So credits for the article introduction go to TechNet.

In Exchange 2007, we have Local Continuous Replication (LCR), Cluster Continuous Replication (CCR), Single Copy Clusters (SCC) and Standby Continuous Replication (SCR). Exchange 2010 combined on-site data replication (CCR) and off-site data replication (SCR) to produce one method to protect mailbox databases.

DAG is a group of up to 16 mailbox servers that host a set of databases and provide automatic database-level recovery from failures that affect individual servers or databases. Those mailbox servers can be geographically dispersed to replicate mailbox databases across sites. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG.

Storage groups no longer exist in Exchange 2010. Mailbox database names are unique within an Exchange 2010 organization; databases are now global objects and, as a result, the primary management interface for Exchange databases has moved within the Exchange Management Console from the Mailbox node under Server Configuration to the Mailbox node under Organization Configuration. Also, because storage groups were removed from Exchange 2010, continuous replication now operates at the database level.

Mailbox  Databases under Organization Configuration

In Exchange 2007 the Microsoft Exchange Replication service on the passive node connects to the share on the active node and copies, or pulls, the log files using the Server Message Block (SMB) protocol. In Exchange 2010 SMB is no longer used for Log shipping and seeding. Instead, Exchange 2010 continuous replication uses a single administrator-defined TCP port, by default DAG uses port 64327. Also, Log shipping no longer uses a pull model where the passive copy pulls the closed log files from the active copy; now the active copy pushes the log files to each configured passive copy.

Another good enhancement is that seeding is no longer restricted to using only the active copy of the database. Passive copies of mailbox databases can now be specified as sources for database copy seeding and reseeding. In addition, Exchange 2010 includes built-in options for network encryption and compression for the data stream.

There are two editions of Exchange 2010, standard and enterprise editions. Both editions include DAGs, but standard edition is limited to 5 databases per server while the enterprise edition can host up to 100 databases per server. Note that if you want to use DAG with failover clustering, you have to install Exchange 2010 on the enterprise editions of Windows Server 2008. And all DAG members should run the same operating system, either Windows Server 2008 on all members or Windows Server 2008 R2 on all members.

Creating and Configuring DAG

There are specific networking requirements that must be met for each DAG and for each DAG member. Each DAG has a single MAPI network, which is used by other servers (e.g., other Exchange 2010 servers, directory servers, witness servers, etc.) to communicate with the DAG member, and zero or more Replication networks, which are networks that are dedicated to log shipping and seeding. However, unlike previous Exchange versions, a database availability group configuration using a single network is supported.

An IP address (either IPv4 or both IPv4 and IPv6) must be assigned to the DAG. This IP address must be on the subnet intended for the MAPI network.

You can assign static IP addresses to the DAG by using the DatabaseAvailabilityGroupIpAddresses parameter. If you use the Exchange Management Console (EMC) to create the DAG, or if you use the New-DatabaseAvailabilityGroup cmdlet without the DatabaseAvailabilityGroupIpAddresses parameter, the task will configure the DAG to use Dynamic Host Configuration Protocol (DHCP) to obtain the necessary IP addresses. If you don’t want the DAG to use DHCP, you can use the Set-DatabaseAvailabilityGroup cmdlet to configure one or more IP addresses for the DAG after it has been created.
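
As an alternative to the EMC wizard used below, the DAG could be created in a single Exchange Management Shell command; a sketch using the names from this walkthrough (the witness server name Ex14HT1 is a placeholder):

# Create DAG1 with a static IP address, a witness server and a witness directory
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer Ex14HT1 `
    -WitnessDirectory C:\DAG1-WS -DatabaseAvailabilityGroupIpAddresses 20.20.0.6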

Now we will create the DAG. In EMC | Organization Configuration | Mailbox. Click the Database Availability Groups tab, right-click and select New Database Availability Group:

New Database  Availability Group

Type a name for the DAG. Remember that the DAG must have a unique name inside the Exchange organization and it can consist of up to 15 characters. I will select the server that hosts the Hub Transport and Client Access server roles as the witness server, and define C:\DAG1-WS as the witness directory.

New Database  Availability Group - Configuration

Click Next to start creating the DAG:

New Database  Availability Group - Finished

After DAG has been created, we can run the command “Get-DatabaseAvailabilityGroup DAG1 | fl” to see the default properties of the DAG:

Get-DatabaseAvailabilityGroup

Note that the DAG has no IP addresses configured. I don’t have DHCP in my test environment, so we have to configure an IP address for the DAG. To do so, we will use the command:
Set-DatabaseAvailabilityGroup DAG1 -DatabaseAvailabilityGroupIpAddresses 20.20.0.6

DatabaseAvailabilityGroupIpAddresses

Now you can add servers to the DAG. In EMC | Organization Configuration | Mailbox, click the Database Availability Groups tab, right-click the DAG you want to manage, and then click Manage Database Availability Group Membership:

Manage Database  Availability Group Membership

Click Add, then select the servers you want to add. I will choose one server, then add the second server using the Exchange Management Shell.

Manage Database  Availability Group Membership - Add Server

Click Manage to add the server as a member to the DAG

Manage Database  Availability Group Membership - Finished

Now we will configure the DAG networks to allow the replication on one subnet other than the MAPI network subnet. From the Database Availability Groups tab, select the DAG. At the bottom pane we can then configure the network properties for the selected DAG.

DAG Networks

I will add an IPv4 subnet and remove the IPv6 subnet. Also make sure that the Enable replication check box is selected to allow replication to happen over this network.

DAG Network  Properties

Next we will disable replication on the MAPI network. Open the properties of the second network, which is configured with the IP of your internal network, and clear the Enable replication check box.

Disable  Replication

Now we will add the second server using the command:
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer Ex14Mbx2 -Verbose

Add-DatabaseAvailabilityGroupServer

The -Verbose parameter instructs the command to provide detailed information about the operation.

To check the members after we have two members in the DAG, we can use the command:
Get-DatabaseAvailabilityGroup

Get-DatabaseAvailabilityGroup

Adding Mailbox Database Copies

Now that we have configured the DAG, we will continue by adding mailbox database copies to start protecting our databases.

We will configure the following scenario:
In all we have two mailbox servers, Ex14Mbx1 and Ex14Mbx2, with two mailbox databases, Main-DB01 and Main-DB02. Ex14Mbx1 holds the active Main-DB01 database copy and a passive copy of Main-DB02, the same applies for Ex14Mbx2; it holds the active Main-DB02 database copy and a passive copy of Main-DB01.

Completed DAG  Setup

From EMC | Organization Configuration | Mailbox, click the Database Management tab, and right-click the database for which we want to add a copy

Add Mailbox  Database Copy

In the Add Mailbox Database Copy window click browse and select the DAG member that you will configure to host the database copy

Add Mailbox  Database Copy - Configuration

In the Add Mailbox Database Copy window, there is an Activation preference number. This value is used when multiple database copies are added for one database and all the copies meet the same criteria for activation. In this case the copy assigned the lowest activation preference number will be activated.

Click add and wait for the command to complete successfully

Add Mailbox  Database Copy - Finished
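
The same copy can be added from the Exchange Management Shell; a minimal sketch using the database and server names from this article:

# Add a passive copy of Main-DB01 on the second DAG member;
# activation preference 2 makes it the second choice when a copy must be activated
Add-MailboxDatabaseCopy -Identity Main-DB01 -MailboxServer Ex14Mbx2 -ActivationPreference 2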

After the copy has been created, we can check the health of the database copy using the Exchange Management Console. In Exchange 2007 we had to use the Exchange Management Shell to check mailbox database and replication health. Now we can use the Database Management tab and look at the Copy Status column.

Database  Management - Copy Status
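
The shell counterpart of the Copy Status column is Get-MailboxDatabaseCopyStatus; for example:

# Show the status of every copy of Main-DB01 (Mounted, Healthy, Resynchronizing, and so on)
Get-MailboxDatabaseCopyStatus -Identity Main-DB01

# Or check all database copies hosted on one DAG member
Get-MailboxDatabaseCopyStatus -Server Ex14Mbx2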

Mailbox Database Switchover

The Mailbox server that hosts the active copy of a database is called the mailbox database master. Sometimes you may need to take the mailbox database master down for maintenance. In this case we need to move the active mailbox database to another mailbox server. This process is called a database switchover. In a database switchover, the active copy of a database is dismounted on the master and a passive copy of that database is mounted. The active mailbox database is mounted on another mailbox server which in its turn becomes the master.

To activate the mailbox database on another server, in EMC | Organization Configuration | Mailbox, click the Database Management tab, at the bottom pane right-click the copy that is hosted on the server on which you want to activate the copy

Activate  Database Copy

The following drop down list will appear to select from:

Activate  Database -  Override Mount

The options in the list are:

  • Lossless If you specify this value, the database doesn’t automatically mount until all logs that were generated on the active copy have been copied to the passive copy.
  • Good Availability If you specify this value, the database automatically mounts immediately after a failover if the copy queue length is less than or equal to 6. Exchange will attempt to replicate the remaining logs to the passive copy and then mounts the database. If the copy queue length is greater than 6, the database doesn’t mount.
  • Best Effort If you specify this value, the database automatically mounts regardless of the size of the copy queue length. Because the database will mount with any amount of log loss, using this value could result in a large amount of data loss.
  • Best Availability If you specify this value, the database automatically mounts immediately after a failover if the copy queue length is less than or equal to 12. The copy queue length is the number of logs recognized by the passive copy that needs to be replicated. If the copy queue length is more than 12, the database doesn’t automatically mount. When the copy queue length is less than or equal to 12, Exchange attempts to replicate the remaining logs to the passive copy and then mounts the database.

Click ok to start activating the copy on the second server. When the process finishes we can see the results in the console:

Activate  Database - Complete

We can also activate the mailbox database copy on another server through Exchange Management Shell using the command:
Move-ActiveMailboxDatabase -Identity Main-DB02 -ActivateOnServer Ex14Mbx1

Move-ActiveMailboxDatabase
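
The four options in the list above correspond to the MountDialOverride parameter of Move-ActiveMailboxDatabase, so the mount behavior can also be chosen explicitly from the shell; a hedged example:

# Switch over Main-DB02 to Ex14Mbx1, mounting only if no log data would be lost
Move-ActiveMailboxDatabase -Identity Main-DB02 -ActivateOnServer Ex14Mbx1 -MountDialOverride Lossless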

Conclusion

In this article we went through a brief overview of database availability groups. We introduced DAG, created and configured DAG to include two member servers. We created mailbox database copies within the DAG and tested moving the database copies between member servers.

Posted in Exchange | Leave a Comment »

How to setup a new Windows 2008 R2 Network Load Balancing

Posted by Alin D on September 10, 2010


1- Install and configure Network Load Balancing (NLB). To perform the following procedures you must use an account that belongs to the local Administrators security group on each host. Perform the following procedures in both hosts.

  • Click Start, click Administrative Tools, and then click Server Manager.
  • In the Features Summary area of the Server Manager main window, click Add Features.
  • In the Add Features Wizard, select the Network Load Balancing check box.
  • Click Install.
  • Alternatively, you can install NLB by typing the following command (You must run the cmd with elevated rights – Right click and choose the option run as Administrator): “servermanagercmd.exe -install nlb“.
  • After installing NLB, check the properties of your network adapter for the NLB option.

2- Create the new NLB cluster (perform the following steps on one of the nodes).

  • Open Network Load Balancing Manager, right-click Network Load Balancing Clusters, and then click New Cluster.
  • To connect to the host that is to be a part of the new cluster, in the Host text box, type the name of the host, and then click Connect.
  • Select the interface that you want to use with the cluster, and then click Next. (The interface hosts the virtual IP address and receives the client traffic to load balance.)
  • In Host Parameters, select a value in Priority (Unique host identifier). This parameter specifies a unique ID for each host. The host with the lowest numerical priority among the current members of the cluster handles all of the cluster’s network traffic that is not covered by a port rule.
    You can override these priorities or provide load balancing for specific ranges of ports by specifying rules on the Port rules tab of the Network Load Balancing Properties dialog box.
    In Host Parameters, you can also add dedicated IP addresses, if necessary.
  • Click Next to continue.
  • In Cluster IP Addresses, click Add and type the cluster IP address that is shared by every host in the cluster. NLB adds this IP address to the TCP/IP stack on the selected interface of all hosts that are chosen to be part of the cluster. (NLB does not support Dynamic Host Configuration Protocol (DHCP). NLB disables DHCP on each interface that it configures, so the IP addresses must be static.)
  • Click Next to continue.
  • In Cluster Parameters, select values in IP Address and Subnet mask (for IPv6 addresses, a subnet mask value is not needed). Type the full Internet name that users will use to access this NLB cluster.
  • In Cluster operation mode, click Unicast to specify that a unicast media access control (MAC) address should be used for cluster operations. In unicast mode, the MAC address of the cluster is assigned to the network adapter of the computer, and the built-in MAC address of the network adapter is not used. We recommend that you accept the unicast default settings.
  • Click Next to continue.
  • In Port Rules, click Edit to modify the default port rules, if needed.
  • To add more hosts to the cluster, right-click the new cluster, and then click Add Host to Cluster. Configure the host parameters (including host priority, dedicated IP addresses, and load weight) for the additional hosts by following the same instructions that you used to configure the initial host. Because you are adding hosts to an already configured cluster, all the cluster-wide parameters remain the same.
  • DONE!!!
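
On Windows Server 2008 R2 the same cluster can also be built from PowerShell with the NetworkLoadBalancingClusters module; a rough sketch (interface names, host name and IP addresses below are placeholders):

# Load the NLB cmdlets (available once the NLB feature is installed)
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the first node, bound to the chosen interface
New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "nlb.contoso.com" `
    -ClusterPrimaryIP 192.168.1.100 -SubnetMask 255.255.255.0 -OperationMode Unicast

# Add the second host to the new cluster
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "NODE2" -NewNodeInterface "Local Area Connection"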

    References:
    Technet Network Load Balancing

    Posted in TUTORIALS, Windows 2008 | Leave a Comment »