Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Data deduplication in Windows 8 improves storage

Posted by Alin D on March 17, 2012

Data deduplication is nothing new. Third-party vendors have used it for things like shrinking backup storage and WAN optimization for years. Even so, there has never been a native deduplication feature in the Windows operating system. That’s about to change, however, with the release of Windows Server 8.

Like the third-party products that have existed for so long, the goal of Windows Server 8’s deduplication feature is to allow more data to reside in less space. Notice that I did not say that the deduplication feature allows more data to be stored in less space. Windows Server 8 will support storage-level deduplication, but it also supports deduplication for data that is in transit.

Storage Deduplication

Even though deduplication is new to the Windows operating system, Microsoft products have used various methods of increasing storage capacity for quite some time. For instance, the Windows operating system has long supported file system (NTFS) level compression. Likewise, some previous versions of Exchange Server sought to maximize the available storage space through the use of Single Instance Storage (SIS). Although such technologies do help to decrease storage costs, neither NTFS compression nor Single Instance Storage is as efficient as Windows Server 8’s deduplication feature.

According to Microsoft’s estimates, Windows Server 8’s deduplication feature should be able to deliver an optimization ratio of 2:1 for general data storage when it ships late this year. This ratio could increase to as much as 20:1 in virtual server environments.

How Storage Deduplication Works

The reason Windows Server 8’s deduplication feature will be more efficient than Single Instance Storage is that SIS works at the file level. In other words, if two identical copies of a file need to exist on a server, Single Instance Storage stores only a single copy of the file and uses pointers to create the illusion that multiple copies exist. Although this technique works well for servers containing many identical files, it does nothing for files that are similar but not identical.

To further illustrate this point, consider the invoices that I send to my clients each month. The invoices exist as Microsoft Word documents, and each document is identical except for the date and the invoice number. Even so, Single Instance Storage would do nothing to reduce the space consumed by these documents.

Deduplication works at the block level rather than the file level. Each file is divided into small chunks. These chunks are of variable sizes, but range from 32 KB to 128 KB. Hence, a single file could be made up of many chunks.

The operating system will compute a hash for each chunk. The hash values are then compared as a way of determining which chunks are identical. When identical chunks are found, all but one copy of the chunk is deleted. The file system uses pointers to reference which chunks go with which files. One way of thinking of this process is that legacy file systems typically treat files as streams of data. However, Windows Server 8’s file system (with deduplication enabled) will treat files more as a collection of chunks.
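As a rough illustration of this chunk-and-hash scheme, here is a minimal Python sketch. This is my own conceptual model, not Microsoft's implementation: the fixed 64 KB chunk size and the SHA-256 hash are assumptions, since the real feature uses variable-size chunks and an unpublished algorithm.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # assumption: fixed-size chunks; the real feature uses 32-128 KB variable chunks

def deduplicate(files):
    """Store each unique chunk once; describe every file as a list of chunk-hash pointers."""
    chunk_store = {}  # hash -> chunk bytes, each unique chunk kept exactly once
    file_table = {}   # filename -> ordered list of chunk hashes (the pointers)
    for name, data in files.items():
        pointers = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(digest, chunk)  # duplicate chunks collapse to one copy
            pointers.append(digest)
        file_table[name] = pointers
    return chunk_store, file_table

def reconstruct(name, chunk_store, file_table):
    """Rebuild a file by following its chunk pointers, as the file system would."""
    return b"".join(chunk_store[h] for h in file_table[name])
```

In this model, two invoices that differ only in a date and an invoice number share almost all of their chunks, so the chunk store holds far less data than the sum of the file sizes.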

Incidentally, the pre-beta version of Windows Server 8 uses file system compression. Whenever possible, the individual chunks of data will be compressed to save space.

Data Integrity

One of the major concerns often expressed with regard to deduplication is file integrity. Although the odds are astronomical, it is theoretically possible for two dissimilar blocks of data to have identical hashes. Some third-party products solve this problem by recalculating the hash using a different and more complex formula prior to deleting duplicate chunks as a way of verifying that the chunks really are identical.
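A hedged sketch of that safeguard: before a supposed duplicate chunk is discarded, its identity is confirmed with a second, independent hash. The function name and the choice of SHA-256/SHA-512 below are my own illustration, not taken from any vendor's implementation.

```python
import hashlib

def safe_to_discard(stored_chunk: bytes, candidate_chunk: bytes) -> bool:
    """Return True only when two chunks are confirmed identical by two
    independent hash functions, guarding against a primary-hash collision."""
    if hashlib.sha256(stored_chunk).digest() != hashlib.sha256(candidate_chunk).digest():
        return False  # primary hashes differ: definitely not duplicates
    # Primary hashes match; re-verify with a second hash before discarding.
    return hashlib.sha512(stored_chunk).digest() == hashlib.sha512(candidate_chunk).digest()
```

A direct byte-for-byte comparison is the strongest possible check when both chunks are still in memory; the second hash is a compromise between certainty and cost.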

Although Microsoft has not specified the exact method that it will use to preserve data integrity, the Windows Server 8 Developer Preview Reviewer’s Guide indicates that the operating system “leverages checksum, consistency, and identity validation to ensure data integrity.” Furthermore, the operating system uses redundancy for certain types of data chunks as a way of preventing data loss.

Bandwidth Optimization

As previously mentioned, Windows Server 8 will allow for the deduplication of both stored data and data in transit. Deduplication techniques similar to those that were previously described are going to be integrated with BranchCache as a way of minimizing the amount of data that must be transmitted over WAN links. These early builds suggest that the native deduplication feature will be able to conserve a significant amount of storage space without adversely affecting file system performance.

Posted in Windows 8

Virtual desktop infrastructures challenged by Windows 8 enhancements

Posted by Alin D on September 26, 2011

IT pros that use server-hosted VDI to deliver Windows have some concerns about the infrastructure and client hardware requirements of the upcoming Windows 8 “Metro” style OS and applications.

Meanwhile, desktop virtualization vendors say Windows 8 will have a positive impact on the user experience and on virtual desktop adoption. The touch support and Metro-style UI and applications make for a richer user experience in Windows 8 desktops, which is great news. But it could have a huge impact on IT shops using server-hosted virtual desktop infrastructure (VDI) to deliver Windows in both LAN and WAN environments, said Ruben Spruijt, a technology officer for PQR, an IT services firm based in the Netherlands.

“Looking at the potential impact on network, client-side requirements and possible requirements on server side while delivering Windows 8 as guest in VDI, I can imagine some design challenges with Windows 8,” Spruijt said.

Dan Bolton, a systems architect for Kingston University in London, uses RemoteFX to deliver virtualized Windows 7 desktops and applications. He’ll test Windows 8 later this year and wonders about the amount of bandwidth he will need to deliver virtual desktops with Windows 8.

“We deliver 100 Megs today and will have to review whether that is enough for our requirements,” Bolton said.

The Windows 8 Developer Preview became available this month, but virtualization products haven’t been updated to work with its high performance graphics, so developers report performance problems running it as a guest OS.

Microsoft reports that about one-third of the early installations are on virtual machines, but recommends running Windows 8 Developer Preview natively on a dedicated computer because Windows 8 relies on hardware acceleration for its user interface.

At Microsoft’s BUILD developer conference earlier this month, however, Microsoft demonstrated a virtualized Windows 8 desktop and Metro style applications being delivered via Remote Desktop Protocol (RDP), and the new features appeared to perform well.

Microsoft also presented the Hyper-V client that will be built into Windows 8.

Windows 8 server enhancements for virtual desktops
While IT pros wonder how Windows 8 will impact their virtual desktop infrastructures, desktop virtualization vendors such as Quest Software, RES Software and VMware Inc. said Windows 8 shouldn’t require big changes to existing VDI environments. In fact, those companies are already testing Windows 8 and say software and protocol updates will only improve the remote Windows user experience.

As it exists today in Windows Server 2008 R2, RemoteFX is designed to run on a LAN. But the remoting protocol comes of age in Windows Server 8 and supports a wider range of deployment scenarios.

The next version supports adaptive networks and delivery of desktops remotely using a WAN, plus full Touch device support, USB device integration and Single Sign On usage discovery, according to Microsoft.

It’s good news for IT pros that have had to invest in WAN accelerators and limit the types of desktops they virtualize due to erratic performance problems. It’s also good for desktop virtualization software vendors that have had to compensate for remote protocol performance issues by developing their own fixes.

For instance, Quest Software’s vWorkspace product is built around RDP/RemoteFX and the company had to develop a RemoteFX add-on that provides features its customers require, such as multimedia and WAN support.

“We never got into this business to be a protocol vendor, so it’s nice to see that Microsoft is taking the protocol issues off the table,” said Jon Rolls, VP of product management for Quest Software’s Desktop Virtualization Group.

Rolls said in Quest’s testing, the Windows 8 preview runs fine inside a vWorkspace virtual machine (VM). “Windows 8 is richer, but we already deliver rich applications and we already use iPad clients with touch screen,” Rolls said. “It won’t change the game as much as people think.”

Microsoft also added support for the use of cheaper storage, pooled desktops and a User Profile Disk for user personalization. Additionally, Windows 8 server will support “fair share resource allocation” to lower hardware requirements per user, according to information shared at BUILD earlier this month.

The improvements to Windows client and server will only help desktop virtualization adoption, according to Jeff Wettlaufer, RES Software’s director of product marketing (who was previously part of Microsoft’s Systems Center Group).

“When you combine the advancements in storage and the management of the virtual machines, the network performance and the RemoteFX improvements, [Windows 8] will perform better than Windows 7 does” in VDI environments, Wettlaufer said.

Posted in TUTORIALS

How to protect your servers with Data Protection Manager

Posted by Alin D on May 31, 2011

The first step in configuring Data Protection Manager to protect the servers that you have deployed agents to is this: Create one or more protection groups. A protection group is a collection of one or more resources for which you have a common protection goal.

For example, resources within a protection group share a common protection schedule, consistency check schedule and even the same disk allocations. If all your protected resources have the same protection-related requirements, there is no reason why you can’t just place all of the protected resources into one protection group. If some of your protected resources need a greater degree of protection than others, you’ll need to create multiple protection groups and place resources into those protection groups according to your protection requirements.


Before you can define your Data Protection Manager server’s first protection group, you must have added at least one hard disk to the storage pool and deployed at least one agent. (Those tasks are fully explained in the previous article.)

To create a protection group, select the DPM Administrator Console’s Protection tab. Then click the Create link found in the Actions pane. When you do, Windows will launch the Create New Protection Group Wizard. Click Next to bypass the wizard’s Welcome screen. You’ll be prompted to enter a name for the protection group you are creating.

You can call the new protection group anything you like, as long as the name has not already been assigned to another protection group. I recommend using a descriptive name to help you to remember the protection group’s purpose.

Click Next and you’ll be prompted to select the resources you want to include within the protection group. Fig. 1 shows that you can select either Shares or Volumes and Folders. You cannot mix the two in a protection group. A protection group must include either all share resources or all volume and folder resources. However, you can create one protection group for shares and another protection group for volumes and folders.

You must select the resources to include within the protection group.

Note: You are not obligated to protect all of a server’s resources. You are free to pick and choose which resources will be added to the protection group.

After selecting the resources that you want the protection group to protect, click the Next button. You’ll see a screen similar to the one shown in Fig. 2. This screen displays the cumulative size of the resources you’re including in the protection group. Microsoft System Center Data Protection Manager will automatically calculate the amount of space that should be allocated to the protection group (this calculation takes a few minutes). You are free to change the space allocation, but Microsoft recommends that you follow DPM’s recommendations.

 

DPM automatically calculates how much disk space should be allocated to the protection group.

Click Next and you’ll be prompted to select the initial replication method for the resources included in the protection group. Microsoft System Center Data Protection Manager is configured to begin replicating data from the protected servers immediately upon completion of the wizard. But you have the option of scheduling a time to begin the replication cycle. DPM also gives you the option of transferring the protected files to the DPM server manually. Manually transferring the protected files to the DPM server is a pain, but it may be worth it if the protected server resides across a WAN link and contains more than a couple of gigabytes of data.

Click Next and you will be asked to create a protection schedule for the protection group. The Create New Protection Group Wizard requires you to create a synchronization schedule and a shadow copy schedule. What’s the difference between synchronization and a shadow copy? In DPM terms, synchronization refers to examining the protected data and copying any changes to the DPM server. The thing to understand about synchronization is that only the bytes that have changed within a file are actually copied. This conserves network bandwidth and disk space, since the entire file isn’t copied each time it changes. For example, if you have a 100 MB file and 1 KB of data within the file changes, only 1 KB of data is sent to the DPM server rather than the entire 100 MB file.
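The changed-bytes-only idea can be sketched as a simple block-level delta. This is a conceptual model only: DPM's actual wire format and block granularity are not documented here, and the 4 KB block size and function names are my assumptions.

```python
BLOCK = 4096  # assumption: illustrative block size; DPM's real granularity may differ

def compute_delta(old: bytes, new: bytes):
    """Compare two copies of a file block by block; return only the
    changed blocks as (offset, data) pairs -- the bytes worth sending."""
    delta = []
    for offset in range(0, max(len(old), len(new)), BLOCK):
        if old[offset:offset + BLOCK] != new[offset:offset + BLOCK]:
            delta.append((offset, new[offset:offset + BLOCK]))
    return delta

def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Rebuild the current file from the previous replica plus the changed blocks."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for offset, data in delta:
        buf[offset:offset + len(data)] = data
    return bytes(buf)
```

For a large file with a small edit, the delta is a tiny fraction of the file size, which is exactly the bandwidth saving the paragraph above describes.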

Because Microsoft System Center Data Protection Manager only copies changed bytes rather than entire files when data changes, it occasionally performs a consistency check. A consistency check is a block-by-block comparison of the data on the protected resource and its backup copy on the DPM server. Because consistency checks chew up time and resources, they are normally performed only when a replica is created for a protection group or when a new data source is added to an existing protection group. It is also possible to schedule a daily consistency check through the Advanced Options section of the Create New Protection Group Wizard’s Protection schedule screen.

Shadow copies are a different animal. DPM is designed to retain multiple versions of each file as the file changes. Each shadow copy is basically a file version.

For example, the default protection schedule consists of hourly synchronizations and performs shadow copies three times a day. So if you had to recover a protected file, you would never lose more than an hour’s worth of changes to that file. However, Microsoft System Center Data Protection Manager doesn’t flag the file as a new version every time you modify the file. New versions are linked to shadow copies. Therefore, with the default schedule, three different versions of the file are retained each day (assuming the file is modified enough to warrant the new versions). Depending on your server’s resources, Microsoft System Center Data Protection Manager can retain up to 64 versions of each protected file.
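The retention behavior described above amounts to a bounded version history per file. Here is a small sketch of that idea; the class and method names are my own invention, and the only number taken from the text is DPM's ceiling of 64 retained versions.

```python
from collections import deque

MAX_VERSIONS = 64  # DPM's documented ceiling of retained versions per protected file

class VersionStore:
    """Keep up to MAX_VERSIONS shadow-copy versions per file; the oldest is dropped first."""
    def __init__(self):
        self.versions = {}  # filename -> deque of snapshots, bounded by MAX_VERSIONS

    def shadow_copy(self, name, data):
        # A deque with maxlen silently evicts the oldest version when full.
        self.versions.setdefault(name, deque(maxlen=MAX_VERSIONS)).append(data)

    def recover(self, name, versions_back=0):
        """versions_back=0 recovers the most recent shadow copy."""
        history = self.versions[name]
        return history[-1 - versions_back]
```

With the default schedule of three shadow copies a day, this 64-version window corresponds to roughly three weeks of recoverable history for a frequently modified file.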

Now that you have defined the protection schedule, click Next and you’ll see a summary of the protection group’s configuration. If everything looks good, click the Create Group button and the initial replica will be created (unless you have chosen a delayed or manual replication).

Posted in TUTORIALS

How to configure Windows Server Update Services (WSUS) to use BranchCache

Posted by Alin D on September 24, 2010

What is BranchCache? BranchCache™ is a new feature in Windows® 7 and Windows Server® 2008 R2 that can reduce wide area network (WAN) bandwidth utilization and enhance network application responsiveness when users in branch offices access content in a central office. When you enable BranchCache, a copy of the content retrieved from the Web server or file server is cached within the branch office. If another client in the branch requests the same content, the client can download it directly from the local branch network without retrieving the content over the WAN.

How does BranchCache work? When a Windows 7 client in a branch office requests data, such as WSUS content, from a head office server, the server checks authentication and authorizes the data to pass to the client. This ordinary exchange happens even without BranchCache.

With BranchCache, however, the client uses the hashes in the metadata to search for the file on the Hosted Cache server. Because this is the first time any client has retrieved the file, it is not yet cached on the local network, so the client retrieves the file directly from the content server. The Hosted Cache server then connects to the client and retrieves the set of blocks that it does not have cached.

When a second Windows 7 client from the same branch requests the same WSUS content, the content server authorizes the user and returns the content identifiers. The second client uses these identifiers to request the data from the Hosted Cache server residing in the branch. This time, it does not retrieve the data from the DFS share residing in the head office.
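The two-client sequence can be modeled as a toy cache protocol. All class and method names here are invented for illustration; the real BranchCache wire protocol is considerably more involved. The point of the sketch is the invariant: only content identifiers (small hashes) always cross the WAN, while full content crosses it once per branch.

```python
import hashlib

class ContentServer:
    """Head office: stores content and hands out small content identifiers (hashes)."""
    def __init__(self):
        self.store = {}  # name -> data

    def publish(self, name, data):
        self.store[name] = data

    def identifier(self, name):
        return hashlib.sha256(self.store[name]).hexdigest()  # small metadata, cheap to send

    def fetch(self, name):
        return self.store[name]  # full content, expensive over the WAN

class BranchClient:
    """Branch office client sharing a Hosted Cache with its neighbors."""
    def __init__(self, server, hosted_cache, counters):
        self.server, self.cache, self.counters = server, hosted_cache, counters

    def request(self, name):
        cid = self.server.identifier(name)  # identifiers always come from the head office
        if cid in self.cache:
            return self.cache[cid]          # hit: served from the branch, no full WAN transfer
        data = self.server.fetch(name)      # miss: the first client pulls over the WAN
        self.counters["wan_fetches"] += 1
        self.cache[cid] = data              # seed the Hosted Cache for the whole branch
        return data
```

Two clients requesting the same update produce exactly one full WAN transfer; the second request is satisfied from the shared cache.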

To configure a Web server or an application server that uses the Background Intelligent Transfer Service (BITS) protocol, you must install the BranchCache feature using Server Manager. To configure a file server to use BranchCache, you must install the BranchCache for Network Files feature and configure the server using Group Policy. This article discusses and shows how to configure WSUS to use BranchCache. The following are the steps involved at the head office and the branch offices.

Head Office:

  1. Install and configure back end SQL Server
  2. Create DFS share
  3. Install and configure front end WSUS Server
  4. Configure GPO for WSUS client

Branch Office:

  1. Install and configure Branchcache File Server
  2. Configure GPO for Branchcache
  3. Install and configure front end WSUS server
  4. Configure GPO for WSUS client

Installing BranchCache File Server

1. Click Start, point to Administrative Tools, and then click Server Manager.

2. Right-click Roles and then click Add Roles.

3. In the Add Roles Wizard, select File Server and BranchCache for network files, and then click Next.

4. In the Confirm Installation Selections dialog box, click Install.

5. In the Installation Results dialog box, confirm that BranchCache installed successfully, and then click Close.

Using Group Policy to configure BranchCache

1. Open the Group Policy Management Console. Click Start, point to Administrative Tools, and then click Group Policy Management Console.

2. Select the domain in which you will apply the Group Policy object, or select Local Computer Policy.

3. Select New from the Action menu to create a new Group Policy object (GPO).

4. Choose a name for the new GPO and click OK.

5. Right-click the GPO just created and choose Edit.

6. Click Computer Configuration, point to Policies, Administrative Templates, Network, and then click Lanman Server.

7. Double-click Hash Publication for BranchCache.

8. Click Enabled.

9. Under Options, choose one of the following Hash publication actions:

a. Allow hash publication for all file shares.

b. Allow hash publication for file shares tagged with “BranchCache support.”

c. Disallow hash publication on all file shares.

10. Click OK.

Using the Registry Editor to configure disk use for stored identifiers

1. Open an elevated command prompt (click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator).

2. At the command prompt, type Regedit.exe, and then press Enter.

3. Navigate to HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters.

4. Right-click the HashStorageLimitPercent value, and then click Modify.

5. In the Value box, type the percentage of disk space that you would like BranchCache to use. Click OK.

6. Close the Registry Editor.

Setting the BranchCache support tag on a file share

1. Click Start, point to Administrative Tools, and then click Share and Storage Management.

2. Right-click a share and then click Properties.

3. Click Advanced.

4. On the Caching tab, select Only the files and programs that users specify are available offline.

5. Select Enable BranchCache, and then click OK.

6. Click OK, and then close the Share and Storage Management Console.

To replicate cryptographic data

1. Open an elevated command prompt (click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator).

2. At the command prompt, type netsh branchcache set key passphrase="MY_PASSPHRASE", and then press Enter. Choose a phrase known only to you. Repeat this process using the same phrase on all computers that are participating in the cluster.

Client configuration using Group Policy

1. Click Start, point to Administrative Tools, and click Group Policy Management Console.

2. In the console tree, select the domain in which you will apply the GPO.

3. Create a new GPO by selecting New from the Action menu.

4. Choose a name for the new GPO, and then click OK.

5. Right click the GPO you created and choose Edit.

6. Click Computer Configuration, point to Policies, Administrative Templates: Policy definitions (ADMX files) retrieved from the local machine, Network, and then click BranchCache.

7. Double-click Turn on BranchCache.

8. Click Enabled, and then click OK.

9. To use Distributed Cache mode, double-click Turn on BranchCache – Distributed Caching mode, click Enabled, and then click OK. Alternatively, to use Hosted Cache mode, double-click Turn on BranchCache – Hosted cache mode, click Enabled, and then click OK.

10. To enable BranchCache for SMB traffic, double-click BranchCache for network files, click Enabled, select a latency value under Options, and then click OK.

Configuring a Branch WSUS server to use BranchCache

In addition to enabling BranchCache in your environment, the WSUS server must be configured to store update files locally (both the update metadata and the update files are downloaded and stored locally on the WSUS server). This ensures that the clients get the update files from the WSUS server rather than directly from Microsoft Update.

Install SQL Server 2005/2008 with Management Studio Express on the back-end computer

  1. Click Start, point at All Programs, point at SQL Server 2005, point at Configuration Tools, and select SQL Server Surface Area Configuration.
  2. Choose Surface Area Configuration for Services and Connections.
  3. In the left window, click the Remote Connections node.
  4. Select Local and remote connections and then select Using TCP/IP only.
  5. Click OK to save the settings.

To ensure administrative permissions on SQL Server

  1. Start SQL Server Management Studio (click Start, click Run, and then type sqlwb).
  2. Connect to the SQL Engine on the server where SQL Server 2005 was installed in Step 1.
  3. Select the Security node and then select Logins.
  4. The right pane will show a list of the accounts that have database access. Check that the person who is going to install WSUS 3.0 on the front-end computer has an account in this list.
  5. If the account does not exist, then right-click the Logins node, select New Login, and add the account.
  6. Set up this account for the roles needed to set up the WSUS 3.0 database. The roles are either dbcreator plus diskadmin, or sysadmin. Accounts belonging to the local Administrators group have the sysadmin role by default.

Install Branch WSUS Server

To install WSUS on the front-end computer, open a command prompt, navigate to the folder containing the WSUS Setup program, and type:

WSUSSetup.exe /q FRONTEND_SETUP=1 SQLINSTANCE_NAME=server\instance CREATE_DATABASE=0

Here, server\instance is the name of the remote SQL server that holds the WSUS database instance. If you do not want a silent installation, omit the /q switch and follow the WSUS installation link.

Important! Microsoft recommends 1 GB of free space for the system partition and 30 GB for WSUS content. But this minimum recommendation will create havoc as the WSUS log, database log and content grow over the years. So I used 50 GB for the system partition and 100 GB for WSUS content in the DFS share.

To configure the proxy server on WSUS front-end servers

  1. In the WSUS administration console, select Options, then Update Source and Proxy Server.
  2. Select the Proxy Server tab, then enter the proxy server name, port, user name, domain, and password, then click OK.
  3. Repeat this procedure on all the front-end WSUS servers.

To specify where updates are stored

  1. In the left pane of the WSUS Administration console, click Options.
  2. In Update Files and Languages, click the Update Files tab.
  3. If you want to store updates in WSUS, select the Store update files locally on this server check box.

To specify whether updates are downloaded during synchronization or when the update is approved

  1. In the left pane of the WSUS Administration console, click Options.
  2. In Update Files and Languages, click the Update Files tab.
  3. If you want to download only metadata about the updates during synchronization, select the Download updates to this server only when updates are approved check box.

To specify language options

  1. In the left pane of the WSUS Administration console, click Options.
  2. In Update Files and Languages, click the Update Languages tab.
  3. In the Advanced Synchronization Options dialog box, under Languages, select one of the following language options, and then click OK.
  4. Select Download updates only in these languages: This means that only updates targeted to the languages you select will be downloaded during synchronization.

How to configure automatic updates by using Group Policy

Log on to a domain controller with administrative privileges. Open the GPO Management Console, select the organizational unit, right-click it, and create and link a new GPO. Name it WSUS Policy, right-click it, and choose Edit. Go to Computer Configuration\Administrative Templates\Windows Components\Windows Update.

Now specify the client target group, the intranet update server location (e.g. http://servername:8530), the update schedule, and the installation schedule.

To set up a DFS share

Note: This DFS share will be used by all front-end WSUS servers.

  1. Go to Start, point at All Programs, point at Administrative Tools, and click Distributed File System.
  2. You will see the Distributed File System management console. Right-click the Distributed File System node in the left pane and click New Root in the shortcut menu.
  3. You will see the New Root Wizard. Click Next.
  4. In the Root Type screen, select Stand-alone root as the type of root, and click Next.
  5. In the Host Server screen, type the name of the host server for the DFS root or search for it with Browse, and then click Next.
  6. In the Root Name screen, type the name of the DFS root, and then click Next.
  7. In the Root Share screen, select the folder that will serve as the share, or create a new one. Click Next.
  8. In the last screen of the wizard, review your selections before clicking Finish.
  9. You will see an error message if the Distributed File System service has not yet been started on the server. You can start it at this time.
  10. Make sure that the domain account of each of the front-end WSUS servers has change permissions on the root folder of this share.

Important! If you are using a DFS share, be careful when uninstalling WSUS from one but not all of the front-end servers. If you allow the WSUS content directory to be deleted, this will affect all the WSUS front-end servers.

To configure IIS for remote access on the front-end WSUS servers

  1. On each of the servers, go to Start, point at All Programs, point at Administrative Tools, and click Internet Information Services (IIS) Manager.
  2. You will see the Internet Information Services (IIS) Manager management console.
  3. Click the server node, then the Web Sites node, then the node for the WSUS Web site (either Default Web Site or WSUS Administration).
  4. Right-click the Content node and select Properties.
  5. In the Content Properties dialog box, click the Virtual Directory tab. In the top frame you will see The content for this resource should come from:
  6. Select A share located on another computer and fill in the UNC name of the share.
  7. Click Connect As, and enter the user name and password that can be used to access that share.
  8. Be sure to follow these steps for each of the front-end WSUS servers that are not on the same machine as the DFS share.

To move the content directories on the front-end WSUS servers

  1. Open a command window.
  2. Go to the WSUS tools directory on the WSUS server: cd "Program Files\Update Services\Tools"
  3. Type the following command: wsusutil movecontent DFSsharename logfilename, where DFSsharename is the name of the DFS share to which the content should be moved, and logfilename is the name of the log file.

To configure Network Load Balancing

1. Enable Network load balancing

  a. Click Start, then Control Panel, Network Connections, Local Area Connection, and click Properties.
  b. Under This connection uses the following items, you may see an entry for Network Load Balancing. If you do not, click Install, then (on the Select Network Component Type screen) select Service, then click Add, then (on the Select Network Service screen) select Network Load Balancing, then OK.
  c. On the Local Area Connection Properties screen, select Network Load Balancing, and then click OK.

2. On the Local Area Connection Properties screen, select Network Load Balancing, and then click Properties.

3. On the Cluster Parameters tab, fill in the relevant information (the virtual IP address to be shared among the front end computers, and the subnet mask). Under Cluster operation mode, select Unicast.

4. On the Host Parameters tab, make sure that the unique host identifier is different for each member of the cluster.

5. On the Port Rules tab, make sure that there is a port rule specifying single affinity (the default). (Affinity is the term used to define how client requests are to be directed. Single affinity means that requests from the same client will always be directed to the same cluster host.)

6. Click OK, and return to the Local Area Connection Properties screen.

7. Select Internet Protocol (TCP/IP) and click Properties, and then click Advanced.

8. On the IP Settings tab, under IP addresses, add the virtual IP of the cluster (so that there will be two IP addresses). This should be done on each cluster member.

9. On the DNS tab, clear the Register this connection’s addresses in DNS checkbox. Make sure that there is no DNS entry for the IP address.
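The effect of single affinity can be sketched in a few lines: hash only the client's source address so that repeated requests from one client always land on the same cluster host. This is a conceptual model with invented names; real NLB uses its own distributed filtering algorithm rather than a simple modulus, but the invariant is the same.

```python
import zlib

def pick_host(client_ip: str, hosts: list) -> str:
    """Single affinity (sketch): the mapping depends only on the source IP,
    so a given client is always directed to the same cluster host."""
    # crc32 is deterministic across runs, unlike Python's salted built-in hash().
    return hosts[zlib.crc32(client_ip.encode()) % len(hosts)]
```

Because the mapping ignores ports and request content, a WSUS client keeps talking to the same front-end server for the life of its update session, which is what single affinity guarantees.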

Posted in TUTORIALS

Easy 10 tips for effective Active Directory design

Posted by Alin D on September 23, 2010

Active Directory design is a science, and it’s far too complex to cover all the nuances within the confines of one article. But I wanted to share with you 10 quick tips that will help make your AD design more efficient and easier to troubleshoot and manage.

1: Keep it simple

The first bit of advice is to keep things as simple as you can. Active Directory is designed to be flexible, and it offers numerous types of objects and components. But just because you can use something doesn’t mean you should. Keeping your Active Directory as simple as possible will help improve overall efficiency, and it will make the troubleshooting process easier whenever problems arise.

2: Use the appropriate site topology

Although there is definitely something to be said for simplicity, you shouldn’t shy away from creating more complex structures when it is appropriate. Larger networks will almost always require multiple Active Directory sites. The site topology should mirror your network topology. Portions of the network that are highly connected should fall within a single site. Site links should mirror WAN connections, with each physical facility that is separated by a WAN link encompassing a separate Active Directory site.

3: Use dedicated domain controllers

I have seen a lot of smaller organizations try to save a few bucks by configuring their domain controllers to pull double duty. For example, an organization might have a domain controller that also acts as a file server or as a mail server. Whenever possible, your domain controllers should run on dedicated servers (physical or virtual). Adding additional roles to a domain controller can affect the server’s performance, reduce security, and complicate the process of backing up or restoring the server.

4: Have at least two DNS servers

Another way that smaller organizations sometimes try to economize is by having only a single DNS server. The problem with this is that Active Directory is totally dependent on the DNS service. If you have a single DNS server and that DNS server fails, Active Directory will cease to function. A second DNS server provides the redundancy needed to keep the domain running.

5: Avoid putting all your eggs in one basket (virtualization)

One of the main reasons organizations use multiple domain controllers is to provide a degree of fault tolerance in case one of the domain controllers fails. However, this redundancy is often circumvented by server virtualization. I often see organizations place all their virtualized domain controllers onto a single virtualization host server. So if that host server fails, all the domain controllers will go down with it. There is nothing wrong with virtualizing your domain controllers, but you should scatter the domain controllers across multiple host servers.

6: Don’t neglect the FSMO roles (backups)

Although Windows 2000 and every subsequent version of Windows Server have supported the multimaster domain controller model, some domain controllers are more important than others. Domain controllers that are hosting Flexible Single Master Operations (FSMO) roles are critical to Active Directory health. Active Directory is designed so that if a domain controller that is hosting FSMO roles fails, AD can continue to function — for a while. Eventually though, a FSMO domain controller failure can be very disruptive.
I have heard some IT pros say that you don’t have to back up every domain controller on the network because of the way Active Directory information is replicated between domain controllers. While there is some degree of truth in that statement, backing up FSMO role holders is critical.
I once had to assist with the recovery effort for an organization in which a domain controller had failed. Unfortunately, this domain controller held all of the FSMO roles and acted as the organization’s only global catalog server and as the only DNS server. To make matters worse, there was no backup of the domain controller. We ended up having to rebuild Active Directory from scratch. This is an extreme example, but it shows how important domain controller backups can be.

7: Plan your domain structure and stick to it

Most organizations start out with a carefully orchestrated Active Directory architecture. As time goes on, however, Active Directory can evolve in a rather haphazard manner. To avoid this, I recommend planning in advance for eventual Active Directory growth. You may not be able to predict exactly how Active Directory will grow, but you can at least put some governance in place to dictate the structure that will be used when it does.

8: Have a management plan in place before you start setting up servers

Just as you need to plan your Active Directory structure up front, you also need to have a good management plan in place. Who will administer Active Directory? Will one person or team take care of the entire thing, or will management responsibilities be divided according to domain or organizational unit? These types of management decisions must be made before you actually begin setting up domain controllers.

9: Try to avoid making major logistical changes

Active Directory is designed to be extremely flexible, and it is possible to perform a major restructuring of it without downtime or data loss. Even so, I would recommend that you avoid restructuring your Active Directory if possible. I have seen more than one situation in which the restructuring process resulted in some Active Directory objects being corrupted, especially when moving objects between domain controllers running differing versions of Windows Server.

10: Place at least one global catalog server in each site

Finally, if you are operating an Active Directory consisting of multiple sites, make sure that each one has its own global catalog server. Otherwise, Active Directory clients will have to traverse WAN links to look up information from a global catalog.

Posted in Windows 2003, Windows 2008 | Tagged: , , , , , , , , , , | Leave a Comment »

10 Core Concepts that Every Windows Network Admin Must Know

Posted by Alin D on September 13, 2010

Introduction

This article is intended to be helpful for Windows network admins out there who need some “brush-up tips,” as well as those who are interviewing for network admin jobs. It presents a list of 10 networking concepts that every network admin should know.

So, here is my list of 10 core networking concepts that every Windows Network Admin (or those interviewing for a job as one) must know:

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses to names (forward and reverse lookups, respectively). Thus, when you go to a web page like http://www.windowsnetworking.com, without DNS that name would not be resolved to an IP address and you would not see the web page. In short, if DNS is not working, “nothing is working” for the end users.

DNS server IP addresses are either manually configured or received via DHCP. If you do an IPCONFIG /ALL in Windows, you will see your PC’s DNS server IP addresses.


Figure 1: DNS Servers shown in IPCONFIG output

So, you should know what DNS is, how important it is, and that DNS servers must be configured correctly and working for “almost anything” to work.

When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).


Figure 2: DNS name resolved to an IP address

For more information on DNS servers, see Brian Posey’s article on DNS Servers.
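
The forward and reverse lookups described above can be reproduced in a few lines of Python using only the standard socket module (a minimal sketch; localhost is used so it works without an Internet connection):

```python
import socket

# Forward lookup: resolve a host name to an IP address. This is what
# happens (via the OS resolver and DNS) before a web page can load.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# Reverse lookup: map an IP address back to a name (a PTR record on a
# real DNS server). This raises socket.herror if no mapping exists.
try:
    name, _aliases, _addrs = socket.gethostbyaddr("127.0.0.1")
    print(name)
except socket.herror:
    print("no reverse mapping for 127.0.0.1")
```

Swap in a real host name and the same calls exercise your configured DNS servers, just as ping does in the figure above.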

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NIC) connected to Ethernet cables, running to Ethernet switches which connect everything together. Without a “link light” on the NIC and the switch, nothing is going to work.

MAC addresses (or physical addresses) are unique strings that identify Ethernet devices. ARP (Address Resolution Protocol) is the protocol that maps IP addresses to Ethernet MAC addresses. When you go to open a web page and get a successful DNS lookup, you know the IP address. Your computer will then perform an ARP request on the network to find out which computer (identified by its Ethernet MAC address, shown in Figure 1 as the Physical address) has that IP address.

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. An IPv4 address is four numbers (octets) separated by three periods, like 1.1.1.1.

Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.

You can see my computer’s IP address in Figure 1 where it says IPv4 Address 10.0.1.107. You can also see that I received it via DHCP where it says DHCP Enabled YES.

Larger blocks of IP addresses are broken down into smaller blocks of IP addresses and this is called IP subnetting. I am not going to go into how to do it and you do not need to know how to do it from memory either (unless you are sitting for a certification exam) because you can use an IP subnet calculator, downloaded from the Internet, for free.
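
If you would rather script the math than memorize it, Python's standard ipaddress module works as a free subnet calculator (a sketch; the 10.0.0.0/16 block is just an example):

```python
import ipaddress

# Break a larger block (a /16) into smaller /24 blocks: IP subnetting.
block = ipaddress.ip_network("10.0.0.0/16")
subnets = list(block.subnets(new_prefix=24))
print(len(subnets))       # 256 smaller subnets
print(subnets[1])         # 10.0.1.0/24

# Inspect one subnet the way a subnet calculator would.
net = ipaddress.ip_network("10.0.1.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256 addresses (254 usable hosts)
print(ipaddress.ip_address("10.0.1.107") in net)  # True
```

The membership test on the last line is exactly the question a DHCP server or router answers: does this address belong to this subnet?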

4.     Default Gateway

The default gateway, shown in Figure 3 as 10.0.1.1, is where your computer goes to talk to another computer that is not on your local LAN. That default gateway is your local router. A default gateway address is not strictly required, but without one you would not be able to talk to computers outside your network (unless you are using a proxy server).


Figure 3: Network Connection Details

5.     NAT and Private IP Addressing

Today, almost every LAN uses private IP addressing (based on RFC1918) and then translates those private IPs to public IPs with NAT (network address translation). The private IP addresses always start with 10.x.x.x, 172.16-31.x.x, or 192.168.x.x (those are the blocks of private IPs defined in RFC1918).

In Figure 2, you can see that we are using private IP addresses because the IP starts with “10”. It is my integrated router/wireless/firewall/switch device that is performing NAT and translating my private IP to my public Internet IP that my router was assigned from my ISP.
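
The RFC1918 ranges listed above can also be checked in code; the following Python sketch uses the standard ipaddress module (the sample addresses are taken from the figures in this article plus one public address for contrast):

```python
import ipaddress

# The three private address blocks defined in RFC1918.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(address: str) -> bool:
    """Return True if the address falls inside a private RFC1918 block."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in RFC1918)

print(is_rfc1918("10.0.1.107"))  # True: the LAN address from Figure 1
print(is_rfc1918("172.31.5.9"))  # True: top of the 172.16-31 range
print(is_rfc1918("8.8.8.8"))     # False: a public Internet address
```

Note that 172.16.0.0/12 is why the second block runs from 172.16.x.x through 172.31.x.x and no further.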

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and you have hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only let in the types of traffic that should be let in.

For more information on firewalls, check out our Firewall articles.

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is “local”.

Your wide area network (WAN) is a “big network” that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN, but in larger enterprises or campus environments, routers route traffic between multiple IP subnets on the same large LAN.

On small home networks, you can have an integrated router that also acts as a firewall, multi-port switch, and wireless access point.

For more information on Routers, see Brian Posey’s Network Basics article on Routers.

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).

Unlike most of the other concepts above, the OSI model isn’t something that network admins use every day. The OSI model is for those seeking certifications like the Cisco CCNA or when taking some of the Microsoft networking certification tests. OR, if you have an over-zealous interviewer who really wants to quiz you.

To fulfill those wanting to quiz you, here is the OSI model:

  • Application – layer 7 – any application using the network, examples include FTP and your web browser
  • Presentation – layer 6 – how the data sent is presented, examples include JPG graphics, ASCII, and XML
  • Session – layer 5 – for applications that keep track of sessions, examples are applications that use Remote Procedure Calls (RPC) like SQL and Exchange
  • Transport – layer 4 – provides reliable communication over the network to make sure that your data actually “gets there”, with TCP being the most common transport layer protocol
  • Network – layer 3 – takes care of addressing on the network, which helps to route the packets, with IP being the most common network layer protocol. Routers function at Layer 3.
  • Data Link – layer 2 – transfers frames over the network using protocols like Ethernet and PPP. Switches function at layer 2.
  • Physical – layer 1 – controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.

At this point, let me stop downplaying the value of the OSI model. Even though it is theoretical, it is critical that network admins understand it and can visualize how every piece of data on the network travels down, then back up, this model; how, at every layer, all the data from the layer above is encapsulated by the layer below, with that layer’s own data added; and how, in reverse, as the data travels back up, it is de-encapsulated.
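
The trip down and back up the stack can be sketched as a toy model. The string "headers" here are purely illustrative (real headers are binary structures), but the wrapping and unwrapping order is the point:

```python
# Toy model of OSI encapsulation and de-encapsulation.
LAYERS = ["TCP", "IP", "Ethernet"]  # transport, network, data link

def encapsulate(payload: str) -> str:
    # Travelling *down* the stack: each layer adds its own header.
    for layer in LAYERS:
        payload = f"[{layer}|{payload}]"
    return payload

def decapsulate(frame: str) -> str:
    # Travelling back *up* the stack: headers come off in reverse order.
    for layer in reversed(LAYERS):
        head = f"[{layer}|"
        assert frame.startswith(head) and frame.endswith("]")
        frame = frame[len(head):-1]
    return frame

frame = encapsulate("GET /index.html")
print(frame)               # [Ethernet|[IP|[TCP|GET /index.html]]]
print(decapsulate(frame))  # GET /index.html
```

Notice that the receiving side must strip headers in the opposite order that the sender added them, which is exactly the symmetry the OSI model describes.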

By understanding this model and how the hardware and software fit together to make a network (like the Internet or your local LAN) work, you can much more efficiently troubleshoot any network. For more information on using the OSI model to troubleshoot a network, see my articles Choose a network troubleshooting methodology and How to use the OSI Model to Troubleshoot Networks.

Summary

I can’t stress enough that if you are interviewing for any job in IT, you should be prepared to answer networking questions. Even if you are not interviewing to be a network admin, you never know when they will send a senior network admin to ask you a few quiz questions to test your knowledge. I can tell you firsthand, the questions above are going to be the go-to topics for most network admins to ask about during a job interview. And if you are already a Windows network admin, hopefully this article serves as an excellent overview of the core networking concepts that you should know. While you may not use these every day, knowledge of these concepts is going to help you troubleshoot networking problems faster.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

Active Directory Sites and Services

Posted by Alin D on August 31, 2010

Active Directory Sites and Services (ADSS) is a key element of any Active Directory Domain. It allows you to structure your Active Directory such that you can tightly control the replication and bandwidth used by multiple, segmented areas of your network.  I recently set up my own domain with separate sites to control the interaction between two different branch offices on the network – this is where it really shines.

Parts of the network are segmented by subnets.  The short version is that a subnet is a high bandwidth, low latency, interconnected network of computers.  You might recognize the common home LAN subnet mask of 255.255.255.0, which describes 254 usable addresses in an IP address range.  It is possible to have more; say, covering 192.168.1.1 – 192.168.3.1 would require a mask of 255.255.252.0.  Using subnets, your domain can be segmented into different sites, allowing you to control your network over both high speed LAN connections and over the slower WAN links that interconnect branch offices or other areas important to your domain.

Wow, that sounds confusing, so here is the simple version as an example.  You have a main office building for a company called WinTastic.  This office is located in Paducah, KY.  WinTastic has expanded and now has opened a branch office in Marsville, TN – a prime business location for their product.  You’re tasked with the job of adding this new branch office to their existing domain.  How do you do this? Why, ADSS of course (didn’t see that coming, did you?).  Your office in Paducah is a LAN that consists of IP addresses 192.168.1.1-192.168.1.254.  Knowing in advance you will be setting up a new Active Directory site, you configure your shiny new DC in Marsville for DHCP and the IP address range 192.168.2.1-192.168.2.254.  Now with the setup portion done and some fancy schmancy VPN linkage between your two networks, it’s time to get knee deep in AD and set up your sites.

A bit more background first: the reason sites are important is to control your domain’s replication between different domain controllers.  Each site needs to have a DC on its subnet.  With ADSS and the proper subnets, every time a client logs on, the DNS servers will provide it with its closest DC (presumably the fastest available to it).  Without this key improvement, one of our poor office workers in Paducah could log in using the server in Marsville over our cruddy DSL connection.  This won’t solve every problem with multiple branches, but it goes a long way toward making your domain very quick, clean, and accessible.

Okay, on to how to set up a site.  First and foremost, you must set up subnets.  Here’s an image of my ADSS:

You can see I have two sites (and subsequently two subnets, though you can assign more than one subnet to a site).  First we’ll look at how to create and set up a new site.  Let’s start by right-clicking on our root Sites folder and clicking New Site.  The wizard that pops up is fairly straightforward except for the site link:

The name is just a descriptive name so that you can remember where this site is (though in its properties you can also set a more descriptive location).  The only interesting thing here is our site link.  You can create a site link or use the default one, but a site link just defines a connection between two or more sites and the cost to use it.  This way, AD can determine the lowest cost link to follow to arrive at an AD site (Exchange uses this heavily to deliver mail among hub servers).  The wizard for site links is fairly easy to understand: just a name and the sites to include.  If you want to set more detailed information, create the link, then right-click it and open its properties, where you can adjust its cost and other settings.  We aren’t going to talk about bridges in depth, but on a quick note, a bridgehead is a hub between two links, e.g. Marsville, TN <–> Paducah, KY <–> Hopkinton, MA.  Paducah acts as a bridge between Marsville and Hopkinton (though it won’t unless it’s defined as one; otherwise there’s no connection between Marsville and Hopkinton).  Depending on where your network services are located, this isn’t a big problem, though it’s not always advisable.

Second, we’ll look at the subnets; you can see I have 192.168.1.0/24 and 192.168.2.0/24.  The first part is the starting IP for the range you are interested in defining, i.e. 192.168.1.0 and 192.168.2.0.  The second part is the subnet bits.  You can calculate your subnet bits at http://www.subnet-calculator.com/.  A subnet bit count of 24 stands for the subnet mask 255.255.255.0, which given my combinations above provides two IP ranges (192.168.1.1-192.168.1.254 and 192.168.2.1-192.168.2.254).  NOTE: if you don’t care what subnet bits are, skip the following paragraph; the calculator determines them for you.

For those that didn’t skip, I see you enjoy knowing everything.  The subnet bits describe how many bits of the given IP address (in our case 192.168.1.0) do not change on our network; they are what makes it part of our subnet.  The part that does change is known as the host ID and is the part of the IP address that changes when it is assigned to a computer (i.e. the .0 part will change when DHCP assigns a new IP).  In our case, 192.168.1 is the subnet ID and will stay the same.  So where does 24 come from?  Well, an IP address is a 32 bit binary number.  Each octet of the IP is eight binary bits (hence the name octet).  In ours, three octets (24 bits) do not change, and this defines our subnet.  It doesn’t have to be 8, 16, or 24; many values in between are possible, though not every range can be represented, due to how the conversion between binary and octets works out.  For example, 255.255.255.0 works but 255.255.253.0 doesn’t, while 255.255.254.0 does.  The last octet can be split as well: 255.255.255.128 is valid, though 255.255.255.5 isn’t.  Short version: use the calculator.
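
The rule behind which masks are valid (a mask must be an unbroken run of 1-bits followed by 0-bits) can be checked with a short Python sketch using the standard ipaddress module:

```python
import ipaddress

def valid_mask(mask: str) -> bool:
    # ipaddress rejects any netmask whose 1-bits are not contiguous.
    try:
        ipaddress.ip_network(f"0.0.0.0/{mask}")
        return True
    except ValueError:
        return False

# The examples from the paragraph above.
for mask in ("255.255.255.0", "255.255.254.0", "255.255.253.0",
             "255.255.255.128", "255.255.255.5"):
    print(mask, "valid" if valid_mask(mask) else "invalid")

# And the subnet-bit equivalence: /24 is the mask 255.255.255.0.
print(ipaddress.ip_network("192.168.1.0/24").netmask)  # 255.255.255.0
```

This is the same check the online calculator performs; 253 is 11111101 in binary, so its 1-bits are broken and the mask is rejected.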

Okay, enough of that.  For those of you who couldn’t care less about the gory details, way to go.  Now onward.  Using these fun subnet bits, we define our subnets by right-clicking on the Subnets folder and clicking New Subnet.  This brings up the following wizard:

The prefix is a base IP address plus subnet bits, i.e. 192.168.1.0/24.  The next thing to select is the site to assign this subnet to.  We just click the site we care about and hit OK.

Now we’ve done the hard part.  From here it’s just drag and drop to move our DCs between sites so that they are placed in their respective areas.  Once they are moved, you’re done; it’s a piece of cake.  If you want to force replication, you can drill down to a server’s NTDS Settings (click a server on the left and NTDS Settings appears at the right), then right-click the NTDS Settings and click Replicate From or Replicate To.  Normally the servers replicate every 180 minutes or at a custom interval specified in your site link properties.

If there are questions, leave them in the comments, but hopefully this will help explain a very important part of Active Directory.

Posted in Windows 2008 | Tagged: , , , , , , , , , , , , , | Leave a Comment »

Active Directory Domain Administration Tools

Posted by Alin D on August 19, 2010

  • Active Directory Domain and Trusts: Manages trusts, domain and forest functional levels, and user principal name suffixes. It is located in administrative tools from either the control panel or the start menu
  • Active Directory Schema Snap-in: This tool will not appear unless it is enabled with the command “regsvr32.exe schmmgmt.dll”. Even then, it is only available by adding it to a custom-built MMC. It allows the modification of the schema for AD DS directories or AD LDS instances. It is best not to change anything here; that’s probably why it’s so difficult to find this tool.
  • Active Directory Sites and Services: Active Directory Domain Controllers automatically update records between themselves, but if a domain is split between two physical locations, it may not be feasible to have the Domain Controllers choose their own replication scheme. This may result in the waste of bandwidth as they replicate across the WAN multiple times in both directions. ADSS allows an administrator to manage replication so that it only crosses the WAN once. The servers that communicate across the WAN are called bridgehead servers, and they replicate to all other Domain Controllers within their site. Active Directory Sites and Services are where you choose all of your replication schemes according to subnet. It can also be specified by server to force a direct replication between two servers in the same site.
  • Active Directory Users and Computers: The tool that everyone knows. It manages users, groups, and domain specific FSMO roles. FSMO stands for Flexible Single Master Operation. The domain-level FSMO roles that domain controllers fulfill are…
    • RID master: The Relative ID master allocates pools of relative IDs to domain controllers and is involved when users or computers are moved between domains. It also manages security principals. The RID is part of the SID (Security Identifier). Only one of these exists per domain.
    • Infrastructure master: Maintains references (by GUID, the globally unique ID) to users and groups from other domains and their membership in local groups. Only one of these exists per domain.
    • PDC Emulator: Originally, Windows NT domains could only have one primary domain controller, which updated, deleted, and managed records in the domain. For backwards compatibility, one domain controller still acts as that primary domain controller. Only one of these exists per domain.
  • ADSI Edit: The Active Directory Service Interfaces editor will modify, query, and edit directory objects and attributes. It is a bit obtuse, but sometimes required. One example is when you need to create a Password Settings object.
  • Best Practices Analyzer: This is not just one tool, but a whole slew of tools available for download from Microsoft. It is available for lots of applications such as WSUS, DNS, Hyper-V, etc. Clearly, not all of them apply to Active Directory.
  • csvde.exe: A command line tool used to bulk add users to the domain from a CSV file. A CSV (comma separated values) file may be created in Excel or Notepad. The tool may also be used to export users from one domain for import into another and to list users in the domain.
  • dcdiag.exe: Diagnoses and creates a report on the status of Active Directory.
  • dcpromo.exe: Command line tool used to create or remove Active Directory. It can also be used to start the GUI version of the installation process.
  • dfsradmin.exe: Used to manage Distributed File System Replication (DFSR), which for SYSVOL is only available at the Windows Server 2008 domain functional level. This checks the replication of the SYSVOL folder, which holds the scripts and Group Policy information for Active Directory. At the 2008 functional level, DFSR replaced FRS (File Replication Service), which was the old method for replication.
  • DNS Manager: A GUI console for managing the Domain Name Server and the records that it maintains.
  • dnscmd.exe: Command line utility used to manage DNS and all of its aspects.
  • dsacls.exe: This command line tool can be used to modify the ACL (access control list) on objects in Active Directory. All objects in Active Directory carry permissions much like NTFS permissions; this is just a way to modify them from the command line.
  • dsadd.exe: Command used to add users, computers, or groups to an Active Directory domain. May be used in a command or incorporated into a script.
  • dsamain.exe: This command line utility is used to browse backups (.dit) of Active Directory.
  • dsdbutil.exe: This command line utility is installed with Active Directory Lightweight Directory Services. It is used to maintain, view, and configure AD LDS instances and ports.
  • dsget.exe: This command is used to retrieve data from Active Directory about an object.
  • dsmgmt.exe: This command line utility manages application partitions and FSMO roles in Active Directory. It will also clean up metadata left behind by AD DCs and LDS servers that were removed without being uninstalled.
  • dsmod.exe: This command line utility is used to modify users, computers, and groups in Active Directory.
  • dsmove.exe: This command will move an object to a new location in the same directory. It can also be used to rename an object.
  • dsquery.exe: Command line utility to search for objects in Active Directory using defined characteristics.
  • dsrm.exe: Command line utility used to remove objects from Active Directory.
  • Event Viewer: A tool that has purposes well beyond Active Directory; however, it does keep a record of changes in Active Directory. If auditing of changes is enabled in Server 2008, it will log the old and new values for each change.
  • gpfixup.exe: After renaming the domain, some Group Policy objects and Group Policy links may stop working properly. This command line utility repairs them.
  • Group Policy Management Console: This console is used to create, manage, back up, and restore GPOs.
  • ipconfig: While this is typically used in networking, this command line tool may reveal that the reason users are unable to authenticate to the domain is that their network configuration is not correct.
  • ksetup.exe: Not actually specific to a Windows Server operating system, this command will prepare a client for a Kerberos v5 realm instead of an Active Directory domain.
  • ktpass.exe: This command line utility is used to configure a non-Windows Kerberos service to be used with an Active Directory domain.
  • ldifde.exe: This command line tool will import entries into AD LDS (Active Directory Lightweight Directory Services).
  • ldp.exe: This tool is invoked from command line and opens in the GUI. It is used to perform LDAP (Lightweight Directory Access Protocol) operations against the directory.
  • movetree.exe: This command line tool, which may be downloaded from Microsoft, is used to move objects from one domain to another in a forest. It is not available in Windows Server 2008.
  • netdom.exe: This command line tool allows the management of computer and user accounts and trust relationships. This is available on client versions of Windows as well.
  • nltest.exe: This command line tool is used to verify trust relationships or check replication status. This is available on client versions of Windows as well.
  • nslookup.exe: Used in the command line, nslookup.exe is used to diagnose DNS problems and view information on name servers. This is available on client versions of Windows as well.
  • ntdsutil.exe: This command line tool is used to perform maintenance on AD DS/AD LDS.
  • repadmin.exe: This command line tool is used to check and manage Active Directory replication between domain controllers, for example to view replication partners or force a replication cycle.
  • Server Manager: This GUI tool in Windows Server 2008 is used to manage many aspects of a Windows Server 2008 system; Active Directory management happens to be a part of it. It is similar to the “Manage Your Server” tool in Server 2003 or Computer Management in other operating systems.
  • System Monitor: A console used to create baseline references (benchmarks) and create charts and graphs of server performance.
  • ultrasound.exe: A console (not available in Windows Server 2008) that is used to troubleshoot FRS replication. It is invoked via command line and relies on WMI (Windows Management Instrumentation).
  • w32tm.exe: Kerberos relies heavily on the fact that all systems in the domain have the same time. The command line tool w32tm.exe is used to view, manage, or diagnose problems with Windows Time. This tool is available on many Windows operating systems.
  • Windows Server Backup (wbadmin.exe): Backs up or restores many parts of a Windows operating system. Introduced in Server 2008; the older version was simply called Backup (ntbackup.exe). It can be used to back up the whole computer or only certain components such as DNS, AD, and AD LDS.
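
As a rough illustration of the csvde.exe input format mentioned above, the following Python sketch writes a minimal import file. The DN suffix DC=example,DC=com, the OU, and the attribute set are assumptions for illustration only; check your own directory paths and schema before importing anything:

```python
import csv

# csvde expects the first row to name the attributes; DN is required.
header = ["DN", "objectClass", "sAMAccountName", "userPrincipalName"]
users = [
    ("jsmith", "John Smith"),
    ("mjones", "Mary Jones"),
]

with open("newusers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    for sam, display in users:
        # Hypothetical OU and domain suffix; replace with your own.
        dn = f"CN={display},OU=Staff,DC=example,DC=com"
        writer.writerow([dn, "user", sam, f"{sam}@example.com"])

# The file would then be imported with:  csvde -i -f newusers.csv
```

The csv module quotes the DN automatically because it contains commas, which is exactly what csvde expects.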

Posted in Windows 2008 | Tagged: , , , , , , , , , , , , , , , , , , , , | Leave a Comment »