Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘IPv4’

How to send SharePoint 2010 notifications through Exchange 2010

Posted by Alin D on November 27, 2011

There are plenty of reasons to set up SharePoint 2010 email notifications. For example, you can use them to alert SharePoint administrators of configuration changes. You can also inform users whenever additions are made to a document library or when existing documents are modified.

No matter which notifications you want to set up, you must have a way to deliver them. Follow these steps to configure SharePoint 2010 to send email notifications through an Exchange 2010 server.

Step 1. Set up a SharePoint server email address


To begin, first give your SharePoint server an email address — even though SharePoint never actually logs on to the mailbox. The address will be used as both the From: and Reply To: addresses for outbound messages. After creating the Exchange mailbox, delegate permission for the mailbox to the SharePoint administrator.

Step 2. Create an Exchange 2010 receive connector


Now create a receive connector on your Exchange 2010 server. This lets Exchange 2010 receive messages directly from SharePoint 2010. To create the receive connector, open the Exchange Management Console (EMC) and navigate to Microsoft Exchange On-Premise -> Server Configuration -> Hub Transport. Click the New Receive Connector link to launch the New Receive Connector wizard.

Choose a name for your new receive connector and specify what you will use the connector for. Name the connector something descriptive like “SharePoint 2010 Outbound Messages,” then set the connector’s intended use to Custom. Click Next to continue.

The next screen asks which IP addresses you want to use to receive mail. Leave the default setting of All Available IPv4 on Port 25, then click Next.

You will see a screen that asks which remote IP addresses the connector should be configured to receive mail from. Click the Add button, then enter the IP address for your SharePoint 2010 server. The connector is set to receive mail from all addresses by default; remove this address range so that the connector only accepts messages from your SharePoint server.

Click Next and you’ll see a summary of the configuration information you’ve entered. Make sure that the information checks out, then click New. When Exchange finishes creating the new receive connector, click Finish.

Finally, configure Exchange to accept non-authenticated connections. While this sounds risky, it’s safe because you’ve already configured Exchange to only accept connections from the SharePoint server. To adjust the permissions, right-click on the receive connector, then click Properties. When the Properties sheet appears, select the Permission Groups tab. Select the Anonymous Users checkbox and click OK.
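
If you prefer to script this step, the same connector can be created from the Exchange Management Shell. The sketch below is a hedged equivalent of the wizard walkthrough above; the Hub Transport server name (EX2010HT) and the SharePoint server IP address are illustrative assumptions, not values from this article.

# A hedged Exchange Management Shell sketch of Step 2; EX2010HT and the
# SharePoint IP address are placeholders.
New-ReceiveConnector -Name "SharePoint 2010 Outbound Messages" -Usage Custom `
    -Server EX2010HT -Bindings "0.0.0.0:25" -RemoteIPRanges "192.168.1.50"
# Allow non-authenticated submissions, but only from the SharePoint server
# listed in RemoteIPRanges above.
Set-ReceiveConnector "EX2010HT\SharePoint 2010 Outbound Messages" `
    -PermissionGroups AnonymousUsers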

Step 3. Configure your SharePoint 2010 server

Now that you’ve prepared your Exchange 2010 server, you must configure SharePoint to use it for outbound messages. To begin, open the SharePoint 2010 Central Administration Console. Click on System Settings, then on the Configure Outgoing E-Mail Settings option.

When prompted, enter your Exchange server’s IP address into the Outbound SMTP Server field. Next, enter your previously designated SharePoint mailbox’s address into the From Address and Reply-To Address fields. Click OK to complete the configuration process.
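
The same settings can also be applied from the SharePoint 2010 Management Shell. Treat the following as a hedged sketch rather than the exact steps above: it assumes the SPWebApplication.UpdateMailSettings(server, from, replyTo, codePage) method and applies the settings to every web application, and the server and mailbox names are placeholders.

# Hedged sketch: push outgoing e-mail settings to all web applications,
# including Central Administration (names are placeholders).
Get-SPWebApplication -IncludeCentralAdministration | ForEach-Object {
    # UpdateMailSettings(SMTP server, from address, reply-to address, code page)
    $_.UpdateMailSettings("exchange.contoso.com", "sharepoint@contoso.com",
        "sharepoint@contoso.com", 65001)
}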

Step 4. Create and test SharePoint alerts

Now that you have configured SharePoint to use Exchange for outbound mail, there is a final step. You must also configure SharePoint 2010 to send alerts. To find out how to create SharePoint 2010 alerts, see my previous tip on the topic.

SharePoint is configured to notify the Administrator, User1 and User2 whenever you make a change to the document library. You’ll also notice that SharePoint is configured to send the alerts via email.



Best practice for a good Microsoft IIS 7 Security

Posted by Alin D on June 21, 2011

Microsoft’s Internet Information Services (IIS) Web server has presented enterprises with more than its share of security problems over the years, including the infamous Code Red worm nearly a decade ago. A key security concern with IIS has always been the number of features that are automatically installed and enabled by default, such as scripting and virtual directories, many of which proved vulnerable to exploit and led to major security incidents.

With the release of IIS 6 a few years ago, a “lockdown by default” approach was introduced with several features either not being installed or installed but disabled by default. IIS 7, the newest iteration, goes even further. It’s not even installed on Windows Server 2008 by default, and when it is installed, the Web server is configured to serve only static content with anonymous authentication and local administration, resulting in the simplest of Web servers and the smallest attack surface possible to would-be hackers.

This is possible because IIS 7 is completely modularized. Let’s briefly dig into why that is and how it enables a more secure product. Essentially administrators can select from more than 40 separate feature modules to completely customize their installation. By only installing the feature modules required for a particular website, administrators can greatly reduce the potential attack surface and minimize resource utilization.

Be aware, however, that this is true only with a clean install. If you are upgrading your Windows OS and running an earlier version of IIS, all the metabase and IIS state information is gathered and preserved. Consequently, many unnecessary Web server features can be installed during an upgrade. Therefore, it is good practice for an organization to revisit its application dependencies on IIS functionality after an upgrade and uninstall any unneeded IIS modules.

Fewer components also mean there are fewer settings to manage and fewer problems to patch, as it’s only necessary to maintain the subset of modules that are actually being used. This reduces downtime and improves reliability. Also, the IIS Management Console, with all its confusing tabs, has been replaced with a far more intuitive GUI tool, which makes it easier to visualize and understand how security settings are implemented. For example, if the component supporting basic authentication is not installed on your system, the configuration setting for it doesn’t appear and confuse matters.

So what components are likely to be needed to run a secure IIS? The first six listed below will be required by any website running more than just static pages; the seventh and eighth will be necessary for anyone needing to encrypt data between the server and client; and shared configuration is useful when you have a Web farm and want each Web server in the farm to use the same configuration files and encryption keys. A scripted installation of these modules follows the list:

  1. Authentication includes integrated Windows authentication, client certificate authentication and ASP.NET forms-based authentication, which lets you manage client registration and authentication at the application level, instead of relying on Windows accounts. 
  2. URL Authorization, which integrates nicely with ASP.NET Membership and Role Management, grants or denies access to URLs within your application based on user names and roles so you can prevent users who are not members of a specific group from accessing restricted content. 
  3. IPv4 Address and Domain Name Rules control access to content based on IP address and domain name. The new property “allowUnlisted” makes it a lot easier to deny access to all IP addresses unless they are explicitly listed. 
  4. CGI and ISAPI restrictions allow you to enable and disable dynamic content in the form of CGI files (.exe) and ISAPI extensions (.dll). 
  5. Request filters incorporate the functionality of the UrlScan tool restricting the types of HTTP requests that IIS 7 will process by rejecting requests containing suspicious data. Like Apache’s mod_rewrite, it can use regular expressions to block attacks or modify requests based on verb, file extension, size, namespace and sequences. 
  6. Logging now provides real-time state information about application pools, processes, sites, application domains and running requests as well as the ability to track a request throughout the complete request-and-response process. 
  7. Server Certificates 
  8. Secure Sockets Layer 
  9. Shared Configuration
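
As mentioned above, one way to install only these modules is to script the role installation. The sketch below is a hedged example for Windows Server 2008 R2 using the ServerManager PowerShell module; the Web-* feature names are the ones reported by Get-WindowsFeature, and on Windows Server 2008 itself ServerManagerCmd.exe would be used instead.

# Hedged sketch: install the IIS role plus only the modules discussed above.
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Windows-Auth, Web-Url-Auth, `
    Web-IP-Security, Web-CGI, Web-ISAPI-Ext, Web-Filtering, Web-Http-Logging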

Other features that enhance the overall security of IIS 7 include new built-in user and group accounts dedicated to the Web server, which enable a common security identifier (SID) to be used across machines and simplify access control list management, as well as application pool sandboxing. Server administrators, meanwhile, have complete control over which settings are configurable by application administrators, while still allowing application administrators to make configuration changes directly in their applications without having administrative access to the server.

IIS 7 is quite a different beast as compared with previous incarnations, and that’s a good thing. It has been designed and built along classic security principles and it gives Windows-based organizations a Web server that can be more securely configured and managed than ever before. There may still not be enough from a security perspective to sway Linux and Apache shops to change to IIS anytime soon, but Microsoft has definitely narrowed the security gap between them. It will take administrators a while to get used to the new modular format and administrative tools and tasks. The training and testing time will be worth it though, as it is an OS and framework that administrators are already familiar with.



Debugging Tools in Windows Server for TCP/IP – Ping, Tracert and Pathping

Posted by Alin D on February 7, 2011

TCP/IP is the backbone for communication and transport in Windows Server; before machines can communicate, TCP/IP must first be configured. TCP/IP is installed by default in Windows Server 2008 R2, and you can also add or remove it during operating system installation. If a TCP/IP connection fails, you will need to identify the cause and point of failure. Windows Server ships with several useful tools that can troubleshoot connections and verify connectivity. In this series of articles we will look at Ping, Tracert, Pathping, IPconfig, Arp, Netstat, Route, Nslookup and DCDiag. Most of the tools have been updated to include switches for both IPv4 and IPv6.

Ping

Ping stands for Packet Internet Groper and can be used to send an ICMP (Internet Control Message Protocol) echo request and receive an echo reply, which verifies the availability of local or remote machines. Ping can be thought of as a utility which sends a message to another machine requesting confirmation that the machine is still there. By default, Ping sends four ICMP packets and waits one second for each response. This default can be changed: both the number of packets sent and the time to wait for responses can be altered through the options available for Ping.
As well as verifying the availability of remote machines, Ping can assist in determining name resolution issues. To use Ping, go to a command prompt and enter Ping Targetname. Several different parameters are available for use with Ping. To show all the parameters, enter Ping /? or Ping (with no parameters). The parameters for use with the Ping command are as below:

  • -4 : Specifies that IPv4 should be used to ping. This is not required when the target machine is identified by an IPv4 address; it is only needed when the target machine is identified by name.
  • -6 : Specifies that IPv6 should be used to ping. As with –4, this is not required when the target machine is identified by an IPv6 address; it is only needed when the target machine is identified by name.
  • -a : Resolves the IP address to the hostname which is displayed if this command is successful.
  • -f : Requests that echo request messages are sent with the Don’t Fragment flag set in the packet (only available in IPv4).
  • -i ttl : Sets the value of the TTL (Time to Live) field in outgoing packets; the maximum value is 255.
  • -j HostList : Routes the packets using the host list (a listing of IP addresses separated by spaces); hosts can be separated by intermediate gateways (i.e. loose source route).
  • -k HostList : Similar to –j, but the hosts cannot be separated by intermediate gateways (i.e. strict source route).
  • -l size : Specifies the length (in bytes) of the packets – default is 32 and the max is 65,527.
  • -n count : Specifies the number of packets which are sent – default is 4.
  • -r count : Records the route for the outgoing and incoming packets. You can specify a count that is equal to or higher than the number of hops between source and destination; the count must be between 1 and 9.
  • -R : Specifies that the round-trip path should be traced (this is only available on IPv6).
  • -s count : Sets a time stamp for the number of hops specified by count; the count must be between 1 and 4.
  • -S SrcAddr : Sets the source address  (this is only available on IPv6).
  • -t : Specifies that Ping should continue sending packets to the destination until interrupted. To stop and display statistics, press Ctrl+Break. To stop and quit PING, press Ctrl+C.
  • -v TOS : Sets the value of the type of service in the packet sent (default for this setting is zero). TOS is specified by a decimal between 0 and 255.
  • -w timeout : Sets the time in milliseconds for the packet timeout. If the reply isn’t received before a timeout, the Request Timed Out error message will be shown. The default timeout is four seconds.
  • TargetName : Sets the hostname or IP address of the destination to ping.

Sometimes remote hosts will be configured to ignore all Ping traffic for security reasons, so that they do not acknowledge their presence. Therefore, the inability to ping a server does not always mean the server is not working.
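
A few typical invocations, combining the switches above, look like this (the host name and address are placeholders, not hosts from this article):

ping -n 2 -w 2000 server01.contoso.com   # two echo requests, 2000 ms timeout each
ping -a 192.168.1.10                     # ping an address and resolve its hostname
ping -6 server01.contoso.com             # force IPv6 name resolution for the target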

Tracert

Tracert is typically used to determine the path or route taken to a final destination by sending ICMP packets with varying TTL (Time to Live) values. Every router the packet encounters on the way reduces the TTL value by at least one, so the TTL is effectively a hop count. The path is determined by looking at the ICMP Time Exceeded messages returned by the intermediate routers. Not all routers return Time Exceeded messages for expired TTL values, and those hops are therefore not captured by the Tracert tool; in these cases, asterisks are shown for that particular hop. To show the different parameters which can be used with Tracert, open the command prompt and enter tracert (with no parameters) to show the help, or type tracert /?.

The parameters associated with the Tracert tool  are as below:

  • -4 : Specifies  tracert.exe may only use IPv4 for the trace.
  • -6 : Specifies  tracert.exe can only use IPv6 for the trace.
  • -d : Prevents the resolution of the IP addresses of routers to their hostnames; this is typically used to speed up the Tracert results.
  • -h maximumHops : Sets the max number of hops taken before reaching the destination – default is 30 hops.
  • -j HostList : Specifies that packets must use the loose source route option, which allows successive intermediate destinations to be separated by one or more routers. The max number of addresses in the host list is 9. This is only useful when tracing IPv4 addresses.
  • -R : Sends the packets to the destination in IPv6, using the destination as an intermediate destination, and tests the reverse route.
  • -S : Specifies which source address to use; this is only useful when tracing IPv6 addresses.
  • -w timeout : Sets the time in milliseconds to wait for the replies.

Tracert is a good utility for determining the number of hops and the latency of communications between two end-points. Even when using high-speed Internet connections, if the Internet is congested or if the route a packet needs to follow requires forwarding between several routers along the way, the added latency will cause noticeable delays in communication.
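
For example, a trace that suppresses reverse name lookups, caps the hop count and shortens the per-hop wait might look like this (the destination is a placeholder):

tracert -d -h 15 -w 1000 www.contoso.com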

Pathping

The Pathping tool is a route tracing tool which combines features of both the Ping and Tracert commands with some additional information that neither of those two commands provides. Pathping is most suited to a network with routers or multiple routes between the source and destination hosts. The Pathping command sends packets to each router on the way to a destination, and then collects the results from each packet returned by the router. Since Pathping calculates the loss of packets at each hop, it is easy to determine which router is causing network issues. A sample invocation follows the parameter list below.
To display the parameters in Pathping, open a command prompt and type Pathping /?.
The parameters for the Pathping command are as follows:

  • -4 : Specifies that pathping may only use IPv4 for the trace.
  • -6 : Specifies that pathping may only use IPv6 for the trace.
  • -g Host-list : Allows the hosts to be separated by intermediate gateways (loose source route).
  • -h maximumHops : Sets the max number of hops prior to reaching a target – default is 30 hops.
  • -i address : Uses a specified source address.
  • -n : Specifies that it is not necessary to resolve addresses to hostnames.
  • -p period : Sets the time to wait between pings, in milliseconds – default is 250 milliseconds (0.25 seconds).
  • -q num_queries : Sets the number of queries sent to each host along the route – default is 3.
  • -w timeout : Sets the timeout for replies in milliseconds.
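
As promised above, here is a sample invocation; it skips reverse name lookups and reduces the number of queries per hop to shorten the collection phase (the destination is a placeholder):

pathping -n -q 10 server01.contoso.com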


Install and Configure Windows Server 2008 DHCP Server

Posted by Alin D on December 8, 2010

Introduction

Dynamic Host Configuration Protocol (DHCP) is a core infrastructure service on any network that provides IP addressing and DNS server information to PC clients and any other device. DHCP is used so that you do not have to statically assign IP addresses to every device on your network and manage the issues that static IP addressing can create. More and more, DHCP is being expanded to fit into new network services like the Windows Health Service and Network Access Protection (NAP). However, before you can use it for more advanced services, you need to first install it and configure the basics. Let’s learn how to do that.

Installing Windows Server 2008 DHCP Server

Installing Windows Server 2008 DHCP Server is easy. DHCP Server is now a “role” of Windows Server 2008 – not a Windows component as it was in the past.

To do this, you will need a Windows Server 2008 system already installed and configured with a static IP address. You will need to know your network’s IP address range, the range of IP addresses you will want to hand out to your PC clients, your DNS server IP addresses, and your default gateway. Additionally, you will want to have a plan for all subnets involved, what scopes you will want to define, and what exclusions you will want to create.

To start the DHCP installation process, you can click Add Roles from the Initial Configuration Tasks window or from Server Manager -> Roles -> Add Roles.

Figure 1: Adding a new Role in Windows Server 2008
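
If you prefer the command line to the wizard, the role can also be installed with Server Manager’s scripting interfaces. This is a hedged sketch: it assumes the feature ID DHCP as reported by ServerManagerCmd -query (Windows Server 2008) and Get-WindowsFeature (Windows Server 2008 R2).

# Windows Server 2008: ServerManagerCmd is the scripted role installer.
ServerManagerCmd -install DHCP

# Windows Server 2008 R2: the ServerManager PowerShell module replaces it.
Import-Module ServerManager
Add-WindowsFeature DHCP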

When the Add Roles Wizard comes up, you can click Next on that screen.

Next, select that you want to add the DHCP Server Role, and click Next.

Figure 2: Selecting the DHCP Server Role

If you do not have a static IP address assigned on your server, you will get a warning that you should not install DHCP with a dynamic IP address.

At this point, you will begin being prompted for IP network information, scope information, and DNS information. If you only want to install DHCP server with no configured scopes or settings, you can just click Next through these questions and proceed with the installation.

On the other hand, you can optionally configure your DHCP Server during this part of the installation.

In my case, I chose to take this opportunity to configure some basic IP settings and configure my first DHCP Scope.

I was shown my network connection binding and asked to verify it, like this:

Figure 3: Network connection binding

What the wizard is asking is, “what interface do you want to provide DHCP services on?” I took the default and clicked Next.

Next, I entered my Parent Domain, Primary DNS Server, and Alternate DNS Server (as you see below) and clicked Next.

Figure 4: Entering domain and DNS information

I opted NOT to use WINS on my network and I clicked Next.

Then, I was prompted to configure a DHCP scope for the new DHCP Server. I opted to configure an IP address range of 192.168.1.50-100 to cover the 25+ PC clients on my local network. To do this, I clicked Add to add a new scope. As you see below, I named the scope WBC-Local, configured the starting and ending IP addresses of 192.168.1.50-192.168.1.100, a subnet mask of 255.255.255.0, a default gateway of 192.168.1.1, a subnet type of wired, and activated the scope.

Figure 5: Adding a new DHCP Scope
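
For reference, the same scope could also be created without the wizard by using the netsh dhcp context on the DHCP server. The values below mirror the example above; treat this as a hedged sketch rather than a transcript of the installation.

# Hedged sketch: create, populate and activate the WBC-Local scope.
netsh dhcp server add scope 192.168.1.0 255.255.255.0 "WBC-Local"
netsh dhcp server scope 192.168.1.0 add iprange 192.168.1.50 192.168.1.100
netsh dhcp server scope 192.168.1.0 set optionvalue 003 IPADDRESS 192.168.1.1
netsh dhcp server scope 192.168.1.0 set state 1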

Back in the Add Scope screen, I clicked Next to add the new scope (once the DHCP Server is installed).

I chose to Disable DHCPv6 stateless mode for this server and clicked Next.

Then, I confirmed my DHCP Installation Selections (on the screen below) and clicked Install.

Figure 6: Confirm Installation Selections

After only a few seconds, the DHCP Server was installed and I saw the window, below:

Figure 7: Windows Server 2008 DHCP Server Installation succeeded

I clicked Close to close the installer window, then moved on to how to manage my new DHCP Server.

How to Manage your new Windows Server 2008 DHCP Server

Like the installation, managing Windows Server 2008 DHCP Server is also easy. Back in my Windows Server 2008 Server Manager, under Roles, I clicked on the new DHCP Server entry.

Figure 8: DHCP Server management in Server Manager

While I cannot manage the DHCP Server scopes and clients from here, what I can do is to manage what events, services, and resources are related to the DHCP Server installation. Thus, this is a good place to go to check the status of the DHCP Server and what events have happened around it.

However, to really configure the DHCP Server and see what clients have obtained IP addresses, I need to go to the DHCP Server MMC. To do this, I went to Start -> Administrative Tools -> DHCP Server, like this:

Figure 9: Starting the DHCP Server MMC

When expanded out, the MMC offers a lot of features. Here is what it looks like:

Figure 10: The Windows Server 2008 DHCP Server MMC

The DHCP Server MMC offers IPv4 & IPv6 DHCP Server info including all scopes, pools, leases, reservations, scope options, and server options.

If I go into the address pool and the scope options, I can see that the configuration we made when we installed the DHCP Server did, indeed, work. The scope IP address range is there, and so are the DNS Server & default gateway.

Figure 11: DHCP Server Address Pool

Figure 12: DHCP Server Scope Options

So how do we know that this really works if we do not test it? The answer is that we do not. Now, let’s test to make sure it works.

How do we test our Windows Server 2008 DHCP Server?

To test this, I have a Windows Vista PC Client on the same network segment as the Windows Server 2008 DHCP server. To be safe, I have no other devices on this network segment.

I did an IPCONFIG /RELEASE then an IPCONFIG /RENEW and verified that I received an IP address from the new DHCP server, as you can see below:

Figure 13: Vista client received IP address from new DHCP Server

Also, I went to my Windows 2008 Server and verified that the new Vista client was listed as a client on the DHCP server. This did indeed check out, as you can see below:

Figure 14: Win 2008 DHCP Server has the Vista client listed under Address Leases

With that, I knew that I had a working configuration and we are done!

In Summary

In this article, you learned how to install and configure DHCP Server in Windows Server 2008. During that process, you learned what DHCP Server is, how it can help you, how to install it, how to manage the server, and how to configure DHCP server specific settings like DHCP Server scopes. In the end, we tested our configuration and it all worked! Good luck configuring your Windows Server 2008 DHCP Server!


Windows Server 2008 R2 DNSSEC–Secure DNS Connections

Posted by Alin D on November 26, 2010

Introduction

With the coming rise of IPv6, accessing computers through DNS names will be more important than ever. While those of us who have been working with IPv4 for many years have found it fairly easy to remember a great number of IPv4 addresses using the dotted quad system of IP network numbering, the fact is that the IPv6 address space is so large, and the hexadecimal format is so complex, that it is likely that only a handful of very dedicated nerds will be able to remember the IP addresses of more than a few computers on their networks. After all, each IPv6 address is 128 bits long – four times as long as an IPv4 address. This is what provides for the much larger address space to accommodate the growing number of hosts on the Internet, but it also makes it more difficult for us to remember addresses.

The Problem: Non-secure Nature of the DNS Database

Given the increasing reliance on DNS that is sure to result, we are going to need a way to make sure that the entries in the DNS database are always accurate and reliable – and one of the most effective ways for us to ensure this is to make sure that our DNS databases are secure. Up until recently, DNS had been a relatively non-secure system, with a large number of assumptions made to provide a basic level of trust.

Due to this non-secure nature, there are many high profile instances where the basic trust has been violated and DNS servers have been hijacked (redirecting the resolution of DNS names to rogue DNS servers), DNS records spoofed, and DNS caches poisoned, leading users to believe they are connecting to legitimate sites when in fact they have been led to a web site that contains malicious content or collects their information by pharming. Pharming is similar to phishing, except that instead of following a link in email, users visit the site on their own, using the correct URL of the legitimate site, so they think they’re safe. But the DNS records have been changed to redirect the legitimate URL to the fake, pharming site.

The Solution: Windows Server 2008 R2 DNSSEC

One solution you can use on your intranet to secure your DNS environment is to use the Windows Server 2008 R2 DNSSEC. DNSSEC is a collection of extensions that improve the security of the DNS protocols. These extensions add origin authority, data integrity and authenticated denial of existence to DNS. The solution also adds several new resource records to DNS, including DNSKEY, RRSIG, NSEC and DS.

How DNSSEC works

What DNSSEC does is allow all the records in the DNS database to be signed, with a method similar to that used for other digitally signed electronic communications, such as email. When a DNS client issues a query, the DNS server returns the digital signatures of the records along with the records themselves. The client, which has the public key of the CA that signed the DNS records, is then able to decrypt the hashed value (the signature) and validate the responses. In order to do this, the DNS client and server are configured to use the same trust anchor. A trust anchor is a preconfigured public key associated with a particular DNS zone.

DNS database signing is available for both file based (non-Active Directory integrated) and Active Directory integrated zones, and replication is available to other DNS servers that are authoritative for the zones in question.

The Windows 2008 R2 and Windows 7 DNS clients are configured, by default, as non-validating, security-aware, stub resolvers. When this is the case, the DNS client allows the DNS server to perform validation on its behalf, but the DNS client is able to accept the DNSSEC responses returned from the DNSSEC enabled DNS server. The DNS client itself is configured to use the Name Resolution Policy Table (NRPT) to determine how it should interact with the DNS server. For example, if the NRPT indicates that the DNS client should secure the connection between the DNS client and server, then certificate authentication can be enforced on the query. If security negotiations fail, it is a strong indication that there is a trust issue in the name resolution process, and the name query attempt will fail. By default, when the client returns the DNS query response to the application that made the request, it will only return the information if the DNS server has validated it.

Ensuring Valid Results

So there are really two methods that are used to ensure that the results of your DNS queries are valid. First, you need to ensure that the DNS servers that your DNS clients connect to are actually the DNS servers you want the DNS clients to connect to – and that they are not rogue or attacker DNS servers that are sending spoofed responses. IPsec is an effective way to ensure the identity of the DNS server. DNSSEC uses SSL to confirm that the connection is secure. The DNS server authenticates itself via a certificate that is signed by a trusted issuer (such as your private PKI).

Keep in mind that if you have IPsec enforced server and domain isolation in force, you must exempt TCP and UDP ports 53 from the policy. Otherwise, IPsec policy will be used instead of certificate based authentication. This will cause the client to fail certificate validation from the DNS server and the secure connection will not be established.

Signed zones

DNSSEC also signs zones, using offline signing with the dnscmd.exe tool. This results in a signed zone file. The signed zone file contains the RRSIG, DNSKEY, DS and NSEC resource records for that zone. After the zone is signed, it has to be reloaded using the dnscmd.exe tool or the DNS Manager console.
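
For reference, the reload can be performed from the command line. This is a minimal, hedged sketch: the server and zone names are placeholders, and the full key-generation and signing syntax lives under dnscmd's /OfflineSign command (see dnscmd /? for the available commands).

# Hedged sketch: reload a zone after its signed zone file has been put in place.
dnscmd DNS01 /ZoneReload secure.contoso.com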

One limitation of signing zones is that dynamic updates are disabled. Windows Server 2008 R2 enables DNSSEC for static zones only. The zone must be resigned each time a change is made to the zone, which may severely limit the utility of DNSSEC in many environments.

The Role of Trust Anchors

Trust anchors were mentioned earlier. DNSKEY resource records are used to support trust anchors. A validating DNS server must include at least one trust anchor. Trust anchors also apply only to the zone that they are assigned. If the DNS server hosts several zones, then multiple trust anchors are used.

The DNSSEC enabled DNS server performs validation for a name in a client query as long as the trust anchor is in place for that zone. The client doesn’t need to be DNSSEC aware for the validation to take place, so that non-DNSSEC aware DNS clients can still use this DNS server to resolve names on the intranet.

NSEC/NSEC3

NSEC and NSEC3 are methods that can be used to provide authenticated denial of existence for DNS records. NSEC3 is an improvement on the original NSEC specification that prevents “zone walking”, a technique that allows an attacker to retrieve all the names in a DNS zone. Zone walking is a powerful tool that attackers can use to reconnoiter your network. NSEC3 is not available in Windows Server 2008 R2, as only support for NSEC is included.

However, there is limited support for NSEC3:

  • Windows Server 2008 R2 can host a zone with NSEC that has NSEC3 delegations. However, the NSEC3 child zones must be hosted on non-Windows DNS servers
  • Windows Server 2008 R2 can be a non-authoritative DNS server configured with a trust anchor for a zone that is signed with NSEC and has NSEC3 child zones.
  • Windows 7 clients can use a non-Microsoft DNS server for DNS name resolution when that server is NSEC3 aware
  • When a zone is signed with NSEC, you can configure the Name Resolution Policy Table to not require validation for the zone. When you do this, the DNS server will not perform validation and will return the response with the Authenticated Data (AD) bit clear

Deploying DNSSEC

To deploy DNSSEC, you will need to do the following:

  • Understand the key concepts of DNSSEC
  • Upgrade your DNS servers to Windows Server 2008 R2
  • Review zone signing requirements, choose a key rollover mechanism, and identify the secure computers and DNSSEC protected zones
  • Generate and backup the keys that sign your zones. Confirm that DNS is still working and answering queries after signing the zones
  • Distribute your trust anchors to all non-authoritative servers that will perform DNS validation using DNSSEC
  • Deploy certificates and IPsec policy to your DNS server
  • Configure the NRPT settings and deploy IPsec policy to client computers

For more information on deploying a secure DNS design using Windows Server 2008 R2, go here.

Summary

In this article, we provided a high-level overview of DNSSEC and discussed the reasons that securing your DNS infrastructure is important to your organization. Windows Server 2008 R2 introduces new features that help make your DNS infrastructure more secure than ever, through the combined use of signed DNS zones, SSL-secured connections to trusted DNS servers, and IPsec authentication and encryption. In a future article, we’ll take apart the DNSSEC solution in more detail and look at the specifics of the new resource records, the signing process, and the client/server interactions that take place between a DNSSEC client and server.


Enable and configure Windows PowerShell Remoting using Group Policy

Posted by Alin D on October 11, 2010

As you may know, Windows PowerShell 2.0 introduced a new remoting feature, allowing for remote management of computers.

While this feature can be enabled manually (or scripted) with the PowerShell 2.0 cmdlet Enable-PSRemoting, I would recommend using Group Policy whenever possible. This guide will show you how this can be accomplished for Windows Vista, Windows Server 2008 and above. For Windows XP and Windows Server 2003, running Enable-PSRemoting in a PowerShell startup script would be the best approach.

Windows PowerShell 2.0 and WinRM 2.0 shipped with Windows 7 and Windows Server 2008 R2. To take advantage of Windows PowerShell Remoting, both of these are required on the downlevel operating systems Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008. Both Windows PowerShell 2.0 and WinRM 2.0 are available for download here, as part of the Windows Management Framework (Windows PowerShell 2.0, WinRM 2.0, and BITS 4.0). To deploy this update to downlevel operating systems I would recommend using WSUS, which is described in detail in this blog post.

Group Policy Configuration

Open the Group Policy Management Console from a domain-joined Windows 7 or Windows Server 2008 R2 computer.

Create or use an existing Group Policy Object, open it, and navigate to Computer Configuration->Policies->Administrative templates->Windows Components

Here you will find the available Group Policy settings for Windows PowerShell, WinRM and Windows Remote Shell:


To enable PowerShell Remoting, the only setting we need to configure is found under “WinRM Service”, named “Allow automatic configuration of listeners”:


Enable this policy, and configure the IPv4 and IPv6 addresses to listen on. To configure WinRM to listen on all addresses, simply use *.

In addition, the WinRM service is not started by default on Windows client operating systems. To configure the WinRM service to start automatically, navigate to Computer Configuration -> Policies -> Windows Settings -> Security Settings -> System Services -> Windows Remote Management, double-click on Windows Remote Management and configure the service startup mode to “Automatic”:



No other settings need to be configured; however, I’ve provided screenshots of the other settings so you can see what’s available:


There is one more thing to configure though; the Windows Firewall.

You need to create a new Inbound Rule under Computer Configuration->Policies->Windows Settings->Windows Firewall with Advanced Security->Windows Firewall with Advanced Security->Inbound Rules:


The WinRM port numbers are predefined as “Windows Remote Management”:


With WinRM 2.0, the default HTTP listener port changed from TCP 80 to TCP 5985. The old port number is part of the predefined scope for compatibility reasons, and may be excluded if you don’t have any legacy WinRM 1.1 listeners.


When the rule is created, you may choose to make further restrictions, i.e. to only allow the IP addresses of your management subnet, or perhaps some specific user groups:


Now that the firewall rule is configured, we are done with the minimal configuration needed to enable PowerShell Remoting using Group Policy.


On a computer affected by the newly configured Group Policy Object, run gpupdate and see if the settings were applied:


As you can see, the listener indicates Source=”GPO”, meaning it was configured from a Group Policy Object.
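
The same check can be performed from a command prompt on a computer that receives the GPO; a short hedged sketch:

gpupdate /force
winrm enumerate winrm/config/listener   # the listener should report Source="GPO"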

When the GPO has been applied to all the affected computers, you are ready to test the configuration.

Here is a sample usage of PowerShell Remoting combined with the Active Directory-module for Windows PowerShell:


The example saves all computer objects in the Domain Controllers Organizational Unit in a variable. Then, a foreach loop invokes a scriptblock, returning the status of the Netlogon service on each of the Domain Controllers.
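
Since the screenshot may be hard to read, here is a hedged reconstruction of that example; the OU distinguished name and domain are assumptions and would need to be adjusted for your environment.

Import-Module ActiveDirectory
# Collect the computer objects in the Domain Controllers OU (domain is a placeholder).
$dcs = Get-ADComputer -Filter * -SearchBase "OU=Domain Controllers,DC=contoso,DC=com"
foreach ($dc in $dcs) {
    # Remotely query the Netlogon service status on each Domain Controller.
    Invoke-Command -ComputerName $dc.Name -ScriptBlock { Get-Service -Name Netlogon }
}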

Summary

We’ve now had a look at how to enable and configure PowerShell Remoting using Group Policy.
There are an incredible number of opportunities opening up with the new Remoting feature in Windows PowerShell 2.0.


Reasons to Upgrade Your DNS Server to Windows Server 2008 R2

Posted by Alin D on October 7, 2010

Introduction

DNS is the backbone of network communications. Without DNS you would be forced to memorize the IP addresses of all the clients and servers on your network. That might have been something you could have done in 1985, but it’s really not realistic as we enter into the second decade of the 21st century. And DNS is going to be even more important as we slowly transition from IPv4 to IPv6. While some talented administrators could realistically remember the dotted quad addresses for dozens or maybe even hundreds of servers, that just isn’t going to happen with IPv6, where the IP addresses are 128-bit hexadecimal numbers. IPv6 is going to bring DNS back to the forefront of your awareness.

Because DNS is going to be ever more important, you’re going to need to be sure that your DNS server solution is secure. Historically, there was a large amount of implicit trust in DNS deployments. There was an implicit trust that the DNS client could trust the DNS server, and there was implicit trust that the records returned from the DNS server to the DNS client were valid. While this “gentleman’s agreement” has worked reasonably well for the last few decades, the time has come when we need to be able to guarantee that the information provided by the DNS server is valid and that client/server DNS communications are secure.

This has me thinking about the Windows Server 2008 R2 DNS server. There are several new features in the Windows Server 2008 R2 DNS server that you can use to improve the overall security of your DNS infrastructure. These include:

  • DNS Security Extensions (DNSSEC)
  • Control over DNS devolution behavior
  • DNS cache locking
  • DNS Socket Pool

In this article, I’m going to provide you a brief overview of each of these features and how you can use them to create a more secure DNS for your network.

DNS Security Extensions (DNSSEC)

DNSSEC is a group of specifications from the Internet Engineering Task Force (IETF) that provide for origin authentication of DNS data, authenticated denial of existence and data integrity (not data confidentiality). The purpose of DNSSEC is to protect against forged DNS information (for example, DNS cache poisoning) by using digital signatures. DNSSEC is actually a collection of new features added to the DNS client/server interaction that help increase the security of the basic DNS protocols. The core DNSSEC features are specified in:

  • RFC 4033
  • RFC 4034
  • RFC 4035

DNSSEC introduces several new terms and technologies on both the client and server side. For example, DNSSEC adds four new DNS resource records:

  • DNSKEY
  • RRSIG
  • NSEC
  • DS

Windows Server 2008 R2 Implementation

Windows Server 2008 R2 and Windows 7 are the first Microsoft operating systems to support DNSSEC. You can now sign and host DNSSEC signed zones to increase the level of security for your DNS infrastructure. The following DNSSEC related features are introduced in Windows Server 2008 R2:

  • The ability to sign a zone (that is, to provide the zone a digital signature)
  • The ability to host signed zones
  • New support for the DNSSEC protocol
  • New support for DNSKEY, RRSIG, NSEC, and DS resource records.

DNSSEC can add origin authority (confirmation and validation of the origin of the DNS information presented to the DNS client), data integrity (assurance that the data has not been changed), and authenticated denial of existence (a signed response confirming that a record does not exist) to DNS.

Windows 7/Server 2008 R2 DNS Client Improvements

In addition to the DNS server updates in Windows Server 2008 R2, there are some improvements in the Windows 7 DNS client (which also includes the DNS client service in Windows Server 2008 R2):

  • The ability to communicate awareness of DNSSEC in DNS queries (which is required if you decide to use signed zones)
  • The ability to process the DNSKEY, RRSIG, NSEC, and DS resource records.
  • The ability to determine whether the DNS server to which it sent a DNS query has performed validation for the client.

DNSSEC and the NRPT

If you’re acquainted with DirectAccess, you might be interested in the fact that DNSSEC leverages the Name Resolution Policy Table (NRPT). The DNS client DNSSEC related behavior is set by the NRPT. The NRPT enables you to create a type of policy based routing for DNS queries. For example, you can configure the NRPT to send queries for contoso.com to DNS server 1, while queries for all other domains are sent to the DNS server address configured on the DNS client’s network interface card. You configure the NRPT in Group Policy. The NRPT is also used to enable DNSSEC for defined namespaces, as seen in Figure 1 below.


Figure 1

Understanding how DNSSEC works

A key feature of DNSSEC is that it enables you to sign a DNS zone – which means that all the records for that zone are also signed. The DNS client can take advantage of the digital signature added to the resource records to confirm that they are valid. This is typical of what you see in other areas where you have deployed services that depend on PKI. The DNS client can validate that the response hasn’t been changed using the public/private key pair. In order to do this, the DNS client has to be configured to trust the signer of the signed zone.

The new Windows Server 2008 R2 DNSSEC support enables you to sign file-based and Active Directory integrated zones through an offline zone signing tool. I know it would have been easier to have a GUI interface for this, but I guess Microsoft ran out of time or figured that not enough people would actually use this feature to make it worthwhile to make the effort to create a convenient graphical interface for signing a zone. The signing process is also done off-line. After the zone is signed, it can be hosted by other DNS servers using typical zone transfer methodologies.

When configured with a trust anchor, a DNS server is able to validate DNSSEC responses received on behalf of the client. However, in order to prove that a DNS answer is correct, you need to know at least one key or DS record that is correct from sources other than the DNS. These starting points are called trust anchors.

Another change in the Windows 7 and Windows Server 2008 R2 DNS client is that it acts as a security-aware stub resolver. This means that the DNS client will let the DNS server handle the security validation tasks, but it will consume the results of the security validation efforts performed by the DNS server. The DNS clients take advantage of the NRPT to determine when they should check for validation results. After the client confirms that the response is valid, it will return the results of the DNS query to the application that triggered the initial DNS query.

Using IPsec with DNSSEC

In general, it’s a good idea to use IPsec to secure communications between all machines that participate on your managed network. The reason for this is that it’s very easy for an intruder to put network analysis software on your network and intercept and read any non-encrypted content that moves over the wire. However, if you use DNSSEC, you’ll need to be aware of the following when crafting your IPsec policies:

  • DNSSEC uses SSL to secure the connection between the DNS client and server. There are two advantages of using SSL: first, it encrypts the DNS query traffic between the DNS client and DNS server, and second, it allows the DNS client to authenticate the identity of the DNS server, which helps ensure that the DNS server is a trusted machine and not a rogue.
  • You need to exempt both TCP port 53 and UDP port 53 from your domain IPsec policy. If you do not, the domain IPsec policy will be used and DNSSEC certificate-based authentication will not be performed. The end result is that the client will fail the EKU validation and end up not trusting the DNS server.

Control Over DNS Devolution

DNS devolution has been available for a long time in Windows DNS clients. No, it doesn’t mean that the operating systems are less evolved. Devolution allows your client computers that are members of a subdomain to access resources in the parent domain without the need to provide the exact FQDN for the resource.

For example, if the client uses the primary DNS suffix corp.contoso.com and devolution is enabled with a devolution level of two, an application attempting to query the host name server1 will attempt to resolve:

  • server1.corp.contoso.com and
  • server1.contoso.com

Notice that when the devolution level is set to two, the devolution process stops when there are two labels left in the domain name (in this case, contoso.com).

Now, if the devolution level were set to three, the devolution process would stop with server1.corp.contoso.com, since server1.contoso.com only has two labels in the domain name (contoso.com).

However, devolution is not enabled in Active Directory domains when:

  1. There is a global suffix search list assigned by Group Policy.
  2. The DNS client does not have the Append parent suffixes of the primary DNS suffix check box selected on the DNS tab in the Advanced TCP/IP Settings for IPv4 or IPv6 Internet Protocol (TCP/IP) Properties of a client computer’s network connection, as shown in Figure 2. Parent suffixes are obtained by devolution.


Figure 2

Previous versions of Windows had an effective devolution level of two. What’s new in Windows Server 2008 R2 is that you can now define your own devolution level, which gives you more control over the organizational boundaries in an Active Directory domain when clients try to resolve names in the domain. You can set the devolution level using Group Policy, as seen in Figure 3 below (Computer Configuration -> Policies -> Administrative Templates -> Network -> DNS Client).


Figure 3

DNS Cache Locking

Cache locking in Windows Server 2008 R2 enables you to control the ability to overwrite information contained in the DNS cache. When DNS cache locking is turned on, the DNS server will not allow cached records to be overwritten for the duration of the time to live (TTL) value. This helps protect your DNS server from cache poisoning. You can also customize the settings used for cache locking.

When a DNS server configured to perform recursion receives a DNS request, it caches the results of the DNS query before returning the information to the machine that sent the request. Like all caching solutions, the goal is to enable the DNS server to provide information from the cache with subsequent requests, so that it won’t have to take the time to repeat the query. The DNS server keeps the information in the DNS server cache for a period of time defined by the TTL on the resource record. However, it is possible for information in the cache to be overwritten if new information about that resource record is received by the DNS server. One scenario where this might happen is when an attacker attempts to poison your DNS cache. If the attacker is successful, the poisoned cache might return false information to DNS clients and send the clients to servers owned by the attacker.

Cache locking is configured as a percentage of the TTL. For example, if the cache locking value is set to 25, then the DNS server will not overwrite a cached entry until 25% of the time defined by the TTL for the resource record has passed. The default value is 100, which means that the entire TTL must pass before the cached record can be updated. The cache locking value is stored in the CacheLockingPercent registry key. If the registry key is not present, then the DNS server will use the default cache locking value of 100. The preferred method of configuring the cache locking value is through the dnscmd command line tool.

An example of how to configure cache locking is seen in Figure 4 below. The percent value can range from 0 to 100.


Figure 4
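
Since the figure is not reproduced here, the command takes roughly the following form; the percentage is an illustrative value, and the DNS Server service must be restarted for the change to take effect.

# Hedged example: require 75% of a record's TTL to elapse before it can be overwritten.
dnscmd /Config /CacheLockingPercent 75
Restart-Service DNS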

Swimming in the Windows Server 2008 R2 DNS Socket Pool

OK, so you can’t swim in a socket pool. But what you can do with the Windows Server 2008 R2 DNS socket pool is enable the DNS server to use source port randomization when issuing DNS queries. Why would you want to do this? Because the source port randomization provides protection against some types of cache poisoning attacks, such as those described over here.

The initial fix included some default settings, but with Windows Server 2008 R2 you can customize socket pool settings.

Source port randomization protects against DNS cache poisoning attacks. With source port randomization, the DNS server will randomly pick a source port from a pool of available sockets that it opens when the service starts. This helps prevent an unauthenticated remote attacker from sending specially crafted responses to DNS requests in order to poison the DNS cache and forward traffic to locations that are under the control of an attacker.

Previous versions of the Windows DNS server used a predictable collection of source ports when issuing DNS query requests. With the new DNS socket pool, the DNS server will use a random port number selected from the socket pool. This makes it much more difficult for an attacker to guess the source port of a DNS query. To further thwart  the attacker, a random transaction ID is added to the mix, making it even more difficult to execute the cache poisoning attack.

The socket pool starts with a default of 2500 sockets. However, if you want to make things even tougher for attackers, you can increase it up to a value of 10,000. The more sockets you have available in the pool, the harder it’s going to be to guess which socket is going to be used, thus frustrating the cache poisoning attacker. On the other hand, you can configure the pool value to be zero. In that case, you’ll end up with a single socket value that will be used for DNS queries, something you really don’t want to do. You can even configure certain ports to be excluded from the pool.

Like the DNS cache feature, you configure the socket pool using the dnscmd tool. The figure below shows you an example using the default values.


Figure 5
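
Again, since the figure is not reproduced here, the command takes roughly this form; the pool size shown is an illustrative value, and the DNS Server service must be restarted afterwards.

# Hedged example: enlarge the socket pool from the default 2,500 to 5,000 sockets.
dnscmd /Config /SocketPoolSize 5000
Restart-Service DNS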

Summary

In this article we went over several new features included in the Windows Server 2008 R2 server and Windows 7 DNS client that increase the security and performance of your DNS infrastructure. The combination of DNSSEC, improvements in control over DNS devolution, security enhancements in the DNS cache and the DNS socket pool all provide compelling reasons to upgrade your DNS servers to Windows Server 2008 R2.


Exchange 2010 Database Availability Group

Posted by Alin D on October 5, 2010

Database Availability Group (DAG) is the new Exchange 2010 high availability feature. This feature provides data availability together with service availability. DAG now is the only built-in way to protect data in Exchange 2010.

This article is composed of an introductory section, where we look at the key facts of Database Availability Groups, and a walkthrough of the implementation of DAG. The introductory sections were authored after researching various TechNet articles, and here I am reproducing salient information taken from TechNet. This information was restructured so that administrators have a central reference point rather than having to go through many TechNet articles. So credit for the article introduction goes to TechNet.

In Exchange 2007, we have Local Continuous Replication (LCR), Cluster Continuous Replication (CCR), Single Copy Clusters (SCC) and Standby Continuous Replication (SCR). Exchange 2010 combined on-site data replication (CCR) and off-site data replication (SCR) to produce one method to protect mailbox databases.

DAG is a group of up to 16 mailbox servers that host a set of databases and provide automatic database-level recovery from failures that affect individual servers or databases. Those mailbox servers can be geographically dispersed to replicate mailbox databases across sites. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG.

Storage groups no longer exist in Exchange 2010. Mailbox database names are unique within an Exchange 2010 organization; databases are now global objects and, as a result, the primary management interface for Exchange databases has moved within the Exchange Management Console from the Mailbox node under Server Configuration to the Mailbox node under Organization Configuration. Also, because storage groups have been removed from Exchange 2010, continuous replication now operates at the database level.

Mailbox  Databases under Organization Configuration

In Exchange 2007 the Microsoft Exchange Replication service on the passive node connects to the share on the active node and copies, or pulls, the log files using the Server Message Block (SMB) protocol. In Exchange 2010 SMB is no longer used for Log shipping and seeding. Instead, Exchange 2010 continuous replication uses a single administrator-defined TCP port, by default DAG uses port 64327. Also, Log shipping no longer uses a pull model where the passive copy pulls the closed log files from the active copy; now the active copy pushes the log files to each configured passive copy.

Another good enhancement is that seeding is no longer restricted to using only the active copy of the database. Passive copies of mailbox databases can now be specified as sources for database copy seeding and reseeding. In addition, Exchange 2010 includes built-in options for network encryption and compression for the data stream.

There are two editions of Exchange 2010: Standard and Enterprise. Both editions include DAGs, but Standard edition is limited to 5 databases per server while Enterprise edition can host up to 100 databases per server. Note that because a DAG relies on failover clustering, you have to install Exchange 2010 on the Enterprise edition of Windows Server 2008. All DAG members should also run the same operating system, either Windows Server 2008 on all members or Windows Server 2008 R2 on all members.

Creating and Configuring DAG

There are specific networking requirements that must be met for each DAG and for each DAG member. Each DAG has a single MAPI network, which is used by other servers (e.g., other Exchange 2010 servers, directory servers, witness servers, etc.) to communicate with the DAG member, and zero or more replication networks, which are dedicated to log shipping and seeding. However, unlike previous Exchange versions, a database availability group configuration is supported using a single network.

An IP address (either IPv4 or both IPv4 and IPv6) must be assigned to the DAG. This IP address must be on the subnet intended for the MAPI network.

You can assign static IP addresses to the DAG by using the DatabaseAvailabilityGroupIpAddresses parameter. If you use the Exchange Management Console (EMC) to create the DAG, or if you use the New-DatabaseAvailabilityGroup cmdlet without the DatabaseAvailabilityGroupIpAddresses parameter, the task will configure the DAG to use Dynamic Host Configuration Protocol (DHCP) to obtain the necessary IP addresses. If you don’t want the DAG to use DHCP, you can use the Set-DatabaseAvailabilityGroup cmdlet to configure one or more IP addresses for the DAG after it has been created.
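As a rough illustration of those cmdlets, the whole DAG could be created in one step from the Exchange Management Shell. This is only a sketch: the witness server name Ex14HubCas1 below is a placeholder, while DAG1, the witness directory and the IP address are the values used later in this lab.

# Hypothetical one-step creation of a DAG with a static IP (Ex14HubCas1 is a placeholder server name)
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer Ex14HubCas1 -WitnessDirectory C:\DAG1-WS -DatabaseAvailabilityGroupIpAddresses 20.20.0.6

# Or assign the static IP afterwards if the DAG was created without one
Set-DatabaseAvailabilityGroup DAG1 -DatabaseAvailabilityGroupIpAddresses 20.20.0.6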

Now we will create the DAG. In EMC, go to Organization Configuration | Mailbox, click the Database Availability Groups tab, right-click and select New Database Availability Group:

New Database  Availability Group

Type a name for the DAG. Remember that the DAG name must be unique within the Exchange organization and can be up to 15 characters long. I will select the server that hosts the Hub Transport and Client Access server roles as the witness server, and define C:\DAG1-WS as the witness directory.

New Database  Availability Group - Configuration

Click Next to start creating the DAG:

New Database  Availability Group - Finished

After the DAG has been created, we can run the command “Get-DatabaseAvailabilityGroup DAG1 | fl” to see the default properties of the DAG:

Get-DatabaseAvailabilityGroup

Note that the DAG has no IP addresses configured. I don’t have DHCP in my test environment, so we have to configure an IP address for the DAG. To do so, we will use the command:
Set-DatabaseAvailabilityGroup DAG1 -DatabaseAvailabilityGroupIpAddresses 20.20.0.6

DatabaseAvailabilityGroupIpAddresses

Now you can add servers to the DAG. In EMC | Organization Configuration | Mailbox, click the Database Availability Groups tab, right-click the DAG you want to manage, and then click Manage Database Availability Group Membership:

Manage Database  Availability Group Membership

Click Add, then select the servers you want to add. I will choose one server here and add the second server later using the Exchange Management Shell.

Manage Database  Availability Group Membership - Add Server

Click Manage to add the server as a member of the DAG.

Manage Database  Availability Group Membership - Finished

Now we will configure the DAG networks so that replication uses a subnet other than the MAPI network subnet. From the Database Availability Groups tab, select the DAG; in the bottom pane we can then configure the network properties for the selected DAG.

DAG Networks

I will add an IPv4 subnet and remove the IPv6 subnet. Also make sure that the Enable replication check box is selected to allow replication over this network.

DAG Network  Properties

Next we will disable replication on the MAPI network. Open the properties of the second network, which is configured with the IP of your internal network, and uncheck the Enable replication check box.

Disable  Replication

Now we will add the second server using the command:
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer Ex14Mbx2 -Verbose

Add-DatabaseAvailabilityGroupServer

The -Verbose parameter instructs the command to provide detailed information about the operation.

To verify the membership now that the DAG has two members, we can use the command:
Get-DatabaseAvailabilityGroup

Get-DatabaseAvailabilityGroup

Adding Mailbox Database Copies

Now that we have configured the DAG, we will continue by adding mailbox database copies to start protecting our databases.

We will configure the following scenario:
In all we have two mailbox servers, Ex14Mbx1 and Ex14Mbx2, with two mailbox databases, Main-DB01 and Main-DB02. Ex14Mbx1 holds the active copy of Main-DB01 and a passive copy of Main-DB02; likewise, Ex14Mbx2 holds the active copy of Main-DB02 and a passive copy of Main-DB01.

Completed DAG  Setup

From EMC | Organization Configuration | Mailbox, click the Database Management tab and right-click the database for which we want to add a copy.

Add Mailbox  Database Copy

In the Add Mailbox Database Copy window, click Browse and select the DAG member that will host the database copy.

Add Mailbox  Database Copy - Configuration

In the Add Mailbox Database Copy window there is an activation preference number. This value is used when a database has multiple copies and more than one copy meets the same activation criteria; in that case, the copy assigned the lowest activation preference number is activated.

Click Add and wait for the command to complete successfully.

Add Mailbox  Database Copy - Finished
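If you prefer the Exchange Management Shell, the same copy can be added with Add-MailboxDatabaseCopy. A minimal sketch using this lab's server and database names, with an activation preference of 2 so this becomes the second-preferred copy:

# Add a passive copy of Main-DB01 on the second DAG member
Add-MailboxDatabaseCopy -Identity Main-DB01 -MailboxServer Ex14Mbx2 -ActivationPreference 2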

After the copy has been created, we can check the health of the database copy using the Exchange Management Console. In Exchange 2007 we had to use the Exchange Management Shell to check mailbox database and replication health; now we can use the Database Management tab and look at the Copy Status column.

Database  Management - Copy Status
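The shell remains handy for health checks as well; for example, using this lab's names:

# Status of every database copy hosted on a given DAG member
Get-MailboxDatabaseCopyStatus -Server Ex14Mbx2

# Status of a single copy, including copy and replay queue lengths
Get-MailboxDatabaseCopyStatus -Identity "Main-DB01\Ex14Mbx2"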

Mailbox Database Switchover

The Mailbox server that hosts the active copy of a database is called the mailbox database master. Sometimes you may need to take the mailbox database master down for maintenance, in which case we need to move the active mailbox database to another Mailbox server. This process is called a database switchover. In a database switchover, the active copy of the database is dismounted on the current master and a passive copy on another Mailbox server is mounted as the new active copy; that server in turn becomes the master.

To activate the mailbox database on another server, go to EMC | Organization Configuration | Mailbox, click the Database Management tab, and in the bottom pane right-click the copy hosted on the server on which you want to activate the database:

Activate  Database Copy

The following drop-down list appears to select from:

Activate  Database -  Override Mount

The options in the list are:

  • Lossless If you specify this value, the database doesn’t automatically mount until all logs that were generated on the active copy have been copied to the passive copy.
  • Good Availability If you specify this value, the database automatically mounts immediately after a failover if the copy queue length is less than or equal to 6. Exchange attempts to replicate the remaining logs to the passive copy and then mounts the database. If the copy queue length is greater than 6, the database doesn’t mount.
  • Best Effort If you specify this value, the database automatically mounts regardless of the copy queue length. Because the database can mount with any amount of log loss, using this value could result in a large amount of data loss.
  • Best Availability If you specify this value, the database automatically mounts immediately after a failover if the copy queue length is less than or equal to 12. The copy queue length is the number of logs recognized by the passive copy that still need to be replicated. If the copy queue length is more than 12, the database doesn’t automatically mount; when it is less than or equal to 12, Exchange attempts to replicate the remaining logs to the passive copy and then mounts the database.

Click OK to start activating the copy on the second server. When the process finishes, we can see the results in the console:

Activate  Database - Complete

We can also activate the mailbox database copy on another server through Exchange Management Shell using the command:
Move-ActiveMailboxDatabase -Identity Main-DB02 -ActivateOnServer Ex14Mbx1

Move-ActiveMailboxDatabase
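The same cmdlet also exposes the mount dial behaviour described above through the MountDialOverride parameter; for example, to require Good Availability when activating the copy:

# Activate Main-DB02 on Ex14Mbx1, but only mount if the copy queue length allows Good Availability
Move-ActiveMailboxDatabase -Identity Main-DB02 -ActivateOnServer Ex14Mbx1 -MountDialOverride GoodAvailability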

Conclusion

In this article we went through a brief overview of database availability groups. We introduced the DAG concept, created and configured a DAG with two member servers, added mailbox database copies within the DAG and tested moving the active database between member servers.

Posted in Exchange | Leave a Comment »

10 Core Concepts that Every Windows Network Admin Must Know

Posted by Alin D on September 13, 2010

Introduction

I thought it would be helpful, for Windows network admins who need some brush-up tips as well as those interviewing for network admin jobs, to come up with a list of 10 networking concepts that every network admin should know.

So, here is my list of 10 core networking concepts that every Windows Network Admin (or those interviewing for a job as one) must know:

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses to names (forward and reverse lookups, respectively). When you browse to a web page like http://www.windowsnetworking.com, without DNS that name would not be resolved to an IP address and you would not see the page. In short, if DNS is not working, “nothing is working” for the end users.

DNS server IP addresses are either manually configured or received via DHCP. If you run IPCONFIG /ALL in Windows, you will see your PC’s DNS server IP addresses.


Figure 1: DNS Servers shown in IPCONFIG output

So you should know what DNS is, how important it is, and that DNS servers must be configured correctly and working for “almost anything” to work.
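As a quick sanity check, you can list the configured DNS servers and test name resolution from a PowerShell prompt; the web site name here is only an example.

# Show the DNS servers this client is configured to use
ipconfig /all | Select-String "DNS Servers"

# Test name resolution with nslookup, or directly through .NET
nslookup www.windowsnetworking.com
[System.Net.Dns]::GetHostAddresses("www.windowsnetworking.com")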

When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).


Figure 2: DNS name resolved to an IP address

For more information on DNS servers, see Brian Posey’s article on DNS Servers.

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NICs) connected to Ethernet cables, running to Ethernet switches that connect everything together. Without a “link light” on the NIC and the switch, nothing is going to work.

MAC addresses (or physical addresses) are unique strings that identify Ethernet devices. ARP (Address Resolution Protocol) is the protocol that maps IP addresses to Ethernet MAC addresses. When you open a web page and get a successful DNS lookup, you know the destination IP address. Your computer then performs an ARP request on the network to find out which computer (identified by its Ethernet MAC address, shown in Figure 1 as the Physical Address) has that IP address.
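You can watch this happen from a PowerShell prompt; the 10.0.1.1 address below is just the gateway used as an example in this article's figures.

# Show the local ARP cache (the IP-to-MAC mappings the PC has learned)
arp -a

# Ping a local host first if its entry is missing, then look it up again
ping 10.0.1.1
arp -a 10.0.1.1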

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. IPv4 addresses are four numbers (octets) separated by three periods, like 1.1.1.1.

Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.

You can see my computer’s IP address in Figure 1 where it says IPv4 Address 10.0.1.107. You can also see that I received it via DHCP where it says DHCP Enabled YES.

Breaking larger blocks of IP addresses into smaller blocks is called IP subnetting. I am not going to cover how to do it here, and you do not need to do it from memory either (unless you are sitting a certification exam), because you can use a free IP subnet calculator downloaded from the Internet.
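If you do want a feel for the arithmetic, here is a minimal PowerShell sketch of the subnet math for an assumed /26 network:

# Hosts available in a /26 subnet: 2^(32-26) addresses minus the network and broadcast addresses
$prefixLength   = 26
$totalAddresses = [math]::Pow(2, 32 - $prefixLength)
$usableHosts    = $totalAddresses - 2
"{0} total addresses, {1} usable hosts in a /{2}" -f $totalAddresses, $usableHosts, $prefixLength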

4.     Default Gateway

The default gateway, shown in Figure 3 as 10.0.1.1, is where your computer goes to talk to another computer that is not on your local LAN. The default gateway is your local router. A default gateway address is not required, but without one you would not be able to talk to computers outside your network (unless you are using a proxy server).


Figure 3: Network Connection Details
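A quick way to confirm which gateway each of your adapters is using is a WMI query from PowerShell:

# List the default gateway configured on each IP-enabled adapter
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true" |
    Select-Object Description, IPAddress, DefaultIPGateway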

5.     NAT and Private IP Addressing

Today, almost every LAN uses private IP addressing (based on RFC1918) and then translates those private IPs to public IPs with NAT (network address translation). Private IP addresses always start with 10.x.x.x, 172.16.x.x through 172.31.x.x, or 192.168.x.x (the blocks of private IPs defined in RFC1918).

In Figure 2, you can see that we are using private IP addresses because the IP starts with “10”. It is my integrated router/wireless/firewall/switch device that performs NAT, translating my private IP to the public Internet IP that my router was assigned by my ISP.
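Here is a rough PowerShell sketch of that RFC1918 test; the helper name Test-PrivateIPv4 is hypothetical, not a built-in cmdlet.

# Returns $true if the IPv4 address falls inside one of the RFC1918 private ranges
function Test-PrivateIPv4 {
    param([string]$Address)
    $octets = [System.Net.IPAddress]::Parse($Address).GetAddressBytes()
    ($octets[0] -eq 10) -or
    ($octets[0] -eq 172 -and $octets[1] -ge 16 -and $octets[1] -le 31) -or
    ($octets[0] -eq 192 -and $octets[1] -eq 168)
}

Test-PrivateIPv4 "10.0.1.107"   # True  - private, gets NATed on the way out
Test-PrivateIPv4 "8.8.8.8"      # False - public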

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only let in the types of traffic that should be allowed in.
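On Windows Vista, 7 and Server 2008 and later, you can quickly check the state of the built-in software firewall from an elevated prompt:

# Show whether the Windows firewall is on or off for each profile
netsh advfirewall show allprofiles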

For more information on firewalls, check out our Firewall articles.

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is “local”.

Your wide area network (WAN) is a “big network” that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN, but in larger enterprises or campus environments routers route traffic between multiple IP subnets on the same large LAN.

On small home networks, you can have an integrated router that also offers firewall, multi-port switch, and wireless access point.

For more information on Routers, see Brian Posey’s Network Basics article on Routers.

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).

Unlike most of the other concepts above, the OSI model isn’t something that network admins use every day. The OSI model is for those seeking certifications like the Cisco CCNA or when taking some of the Microsoft networking certification tests. OR, if you have an over-zealous interviewer who really wants to quiz you.

To fulfill those wanting to quiz you, here is the OSI model:

  • Application – layer 7 – any application using the network, examples include FTP and your web browser
  • Presentation – layer 6 – how the data sent is presented, examples include JPG graphics, ASCII, and XML
  • Session – layer 5 – for applications that keep track of sessions, examples are applications that use Remote Procedure Calls (RPC) like SQL and Exchange
  • Transport – layer 4 – provides reliable communication over the network to make sure that your data actually “gets there”, with TCP being the most common transport layer protocol
  • Network – layer 3 – takes care of addressing on the network and helps to route the packets, with IP being the most common network layer protocol. Routers function at Layer 3.
  • Data Link – layer 2 – transfers frames over the network using protocols like Ethernet and PPP. Switches function at layer 2.
  • Physical – layer 1 – controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.

At this point, let me stop downplaying the value of the OSI model. Even though it is theoretical, it is critical that network admins understand and can visualize how every piece of data on the network travels down, and then back up, this model; how at every layer of the OSI model the data from the layer above is encapsulated by the layer below, with that layer’s additional data added; and how, in reverse, the data is de-encapsulated as it travels back up the stack.

By understanding this model and how the hardware and software fit together to make a network (like the Internet or your local LAN) work, you can much more efficiently troubleshoot any network. For more information on using the OSI model to troubleshoot a network, see my articles Choose a network troubleshooting methodology and How to use the OSI Model to Troubleshoot Networks.

Summary

I can’t stress enough that if you are interviewing for any job in IT, you should be prepared to answer networking questions. Even if you are not interviewing to be a network admin, you never know when they will send a senior network admin to ask you a few quiz questions to test your knowledge. I can tell you first hand that the topics above are the go-to subjects most network admins will ask about during a job interview. And if you are already a Windows network admin, hopefully this article serves as a useful overview of the core networking concepts you should know. While you may not use them every day, knowledge of these concepts is going to help you troubleshoot networking problems faster.

Posted in TUTORIALS | Leave a Comment »

How to migrate your existing Active Directory to Windows Server 2008

Posted by Alin D on August 19, 2010

This is a brief How To guide (the first of many) on how to migrate your existing Active Directory to Windows Server 2008.

Please note that I cannot be held responsible for any issues you encounter when following this guide; my upgrade was done in a lab environment on a single domain controller running Exchange 2003.

If you do follow this on a live system, please, please, please run a full backup of your domain controllers and verify that the backup was successful. Even though this is a straightforward upgrade, if anything goes wrong during the upgrade you could be left with a domain that NO users can log on to.

Before you start upgrading

Verify that your domain controllers meet these requirements:

  • The hardware meets or exceeds the requirements for Windows Server 2008.
  • All hardware and software is compatible with Windows Server 2008, including antivirus software and drivers.
  • You have ample disk space to perform the install.
  • The current domain functional level is Windows 2000 Native or Windows Server 2003. You cannot upgrade directly from Windows NT 4.0, Windows 2000 Mixed or Windows Server 2003 Interim domain functional levels.
  • All Windows 2000 Server domain controllers have Service Pack 4 installed.

Test your domain

Active Directory domains are very resilient and can continue to function even when there are various problems. Even if your Active Directory seems to be working properly, you might have logon delays, replication failures or Group Policy settings that aren’t being applied. These conditions can cause problems during an upgrade, so it’s crucial to resolve them now.

These tools will help you identify and diagnose any problems:

  • Dcdiag.exe. Run this tool to analyse your Active Directory for common problems; it’s included with Windows Server 2003 and Windows Server 2008. Sample commands are shown after this list.
  • Repadmin.exe. Use Repadmin.exe to identify Active Directory replication problems; it’s included with Windows Server 2003 and Windows Server 2008.
  • Gpotool.exe. Use this tool to verify that Group Policy is consistent among domain controllers; it’s included with the Windows Server 2003 Resource Kit tools, available at http://go.microsoft.com/fwlink/?linkid=27766.
  • Event Viewer. Review the Directory Services log file for errors that might indicate problems.
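For Dcdiag and Repadmin, a typical set of health checks looks like the following; run them from an elevated prompt on a domain controller, and note that the output file paths are just placeholders for wherever you want the reports saved.

# Verbose, comprehensive diagnostics for every DC in the forest
dcdiag /v /c /e > C:\dcdiag.txt

# Replication summary and per-partner replication results
repadmin /replsummary
repadmin /showrepl * /csv > C:\repl.csv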

Prepare Your Schema

If you upgraded from Windows 2000 to Windows Server 2003, you will be familiar with the Adprep.exe tool on the Windows Server 2003 CD that prepares your forest and domain schema. To prepare the schema for Windows Server 2008, you need to run the adprep tool from the Windows Server 2008 DVD; it is located in the Sources\Adprep folder on the DVD.

Run the following commands to prepare your forest and domain for Windows Server 2008:

Adprep /forestprep
Adprep /domainprep
Adprep /domainprep /gpprep
Adprep /rodcprep

If you get an error during Adprep /domainprep about the domain not being in native mode, you need to raise the domain functional level and then re-run domainprep. To do so, go into Active Directory Domains and Trusts, right-click the domain and select Raise Domain Functional Level…

Once you have finished running Adprep on your domain controller, join your new Windows Server 2008 server to the domain and make sure it has a static IP address assigned. I am using IPv4 because, to be honest, I know nothing about IPv6 just now, so when running dcpromo click Yes at the prompt about the static IP assignment.
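If you need to set the static address from the command line first, something like the following works; the connection name, IP addresses and DNS server below are only placeholders for your own values.

# Assign a static IPv4 address, mask and default gateway to the named connection
netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1

# Point the connection at your existing DNS server (usually the current domain controller)
netsh interface ipv4 add dnsserver name="Local Area Connection" address=192.168.1.5 index=1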

Once dcpromo has finished, you will have a functioning Windows Server 2008 Active Directory domain controller.


Posted in Windows 2008 | Leave a Comment »