Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘storage area network’

Proactive Active Directory Monitoring

Posted by Alin D on July 28, 2011

Companies go out of their way to ensure proper Active Directory backup procedures, various redundancy solutions and anything else that will help prevent or mitigate a disaster. For the most part, these are mainly reactive solutions.

Many engineers have become so complacent with backup that they’ve forgotten one very important element, which is to keep Active Directory healthy in the first place. When AD becomes corrupt, it can be restored from a snapshot or repaired with Ntdsutil.exe.

Being proactive doesn’t mean that planning for a disaster goes out the window. Key elements to disaster prevention include maintaining good backups and making sure snapshots are done on a storage area network, where available. However, there are certain tips and tricks within AD’s functionality that will help keep the entire environment more stable and healthy.

Protecting AD against “accidental” object deletion

Almost every engineer has made a mistake within Active Directory. Sometimes it’s a simple misspelling of a user’s name and other times it can be a bit more serious. There have been instances where an administrator logs into AD to perform some type of management and then accidentally deletes an entire organizational unit (OU). What if that OU contains 3000 users? Now what?

In many situations, the administrator would then have to restore the AD database or try to find the latest AD snapshot. However, in Windows Server 2008 R2, Microsoft gives IT administrators a great option designed to protect Active Directory objects from being accidentally deleted. This option is available for all objects that are manageable through Active Directory Users and Computers, and is enabled by default when you create a new OU. By selecting the “Protect container from accidental deletion” option, an access control entry is added to the access control list on the object.

Note: Accidental deletion protection is enabled by default only for OUs, not for user objects. This means that if you attempt to delete one or more user objects, even if they're located inside a protected OU, you will succeed.

With that in mind, to protect user, group, or computer objects from accidental deletion, you must enable this option manually in the object’s properties. Change the view in ADUC so that it shows the advanced features, open the object’s Properties window, and click on the Object tab. There you can select the accidental deletion protection option.
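On Windows Server 2008 R2, the same flag can also be set in bulk with the Active Directory module for Windows PowerShell. This is only a sketch: the OU distinguished name and domain are placeholders, and it assumes the Active Directory module is available on the management machine.

```powershell
Import-Module ActiveDirectory

# Placeholder OU - substitute your own distinguished name
$ou = "OU=Sales,DC=contoso,DC=com"

# Protect the OU itself from deletion
Set-ADOrganizationalUnit -Identity $ou -ProtectedFromAccidentalDeletion $true

# Deletion protection is not inherited by the user objects inside the OU,
# so flag each of them individually as well
Get-ADUser -Filter * -SearchBase $ou |
    Set-ADObject -ProtectedFromAccidentalDeletion $true
```

Re-running the pipeline after adding new users keeps the protection consistent across the OU.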

Managing AD size by performing off-line defragmentation

There are preset AD functions that work in the background to keep the environment healthy. For example, the online maintenance cycle keeps the database in check regularly and without administrator interaction. However, although the data within the database is regularly defragmented, the database itself has a tendency to increase in size over time.

This is especially true if administrators periodically purge database records. For example, it’s quite possible to have a 4 GB Active Directory database that contains less than 1 GB of data, and over 3 GB of empty space. This space can be reclaimed by performing an off-line defragmentation.

In Windows Server 2008, Active Directory runs as a service. Any time you want to perform maintenance on the Active Directory database, you can take it off-line by simply stopping the Active Directory Domain Services service.

It’s always a good idea to begin the process by performing a full system state backup. Once a successful backup is verified, open Windows Explorer and navigate to the C:\Windows\NTDS folder. The Active Directory database is stored in the NTDS.DIT file. You should make note of the size of this file so that you can go back later on and figure out how much space you have reclaimed.

At this point, you should open the Service Control Manager, and stop the Active Directory Domain Services service. After that’s complete, you will see a message telling you that a number of dependency services also need to be stopped. Click “Yes” to stop these additional services.

Once all of the necessary services have been stopped, open Command Prompt on the server, and enter the following commands:

NTDSUTIL

Activate Instance NTDS

Files

Info

At this point, you should see a summary of the files that are used by the Active Directory database. You can now begin the defragmentation process by entering the following command:

Compact to c:\windows\ntds\defragged

Keep in mind that depending on the size of your database, this process can take quite a while to complete, and the domain controller that you are defragmenting is unavailable until the Active Directory Domain Services and all of the dependency services are brought back online.

When the process completes, go to the C:\Windows\NTDS folder and rename the NTDS.DIT file to NTDS.OLD. You can delete this file later on, but hang onto it for right now just in case anything goes wrong with the defragmented copy of the database. Now, copy the defragmented database from C:\Windows\NTDS\Defragged to C:\Windows\NTDS.

Finally, restart the Active Directory Domain Services service (the dependency services will restart automatically). Now you can compare the NTDS.DIT file size against the figure you noted earlier to see how much space was reclaimed.
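Putting the steps above together, a complete console session looks roughly like the following. The target folder is the example path from this article; prompts are abbreviated.

```
C:\> ntdsutil
ntdsutil: activate instance ntds
ntdsutil: files
file maintenance: info
file maintenance: compact to c:\windows\ntds\defragged
file maintenance: quit
ntdsutil: quit
```

As an extra precaution, after copying the compacted NTDS.DIT back into C:\Windows\NTDS, you can re-enter the same `files` menu and run `integrity` to verify the database before restarting the services.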

Proactive Tips and Best Practices
There are many ways to keep your AD environment humming. Given its critical nature, every avenue should be taken to make sure Active Directory does not go down. Below is a brief list of some ways to be proactive when it comes to AD stability, security, and health:

  • Rename or disable the Administrator account (and guest account) in each domain to prevent attacks on your domains.
  • Manage the security relationship between two forests and simplify security administration and authentication across forests.
  • Place at least one domain controller in every site, and make at least one domain controller in each site a global catalog.
    • Sites that do not have their own domain controllers and at least one global catalog are dependent on other sites for directory information and are less efficient.
  • Use global groups or universal groups instead of domain local groups when specifying permissions on domain directory objects replicated to the global catalog.
  • Always have current backups and verify their consistency.
  • To provide additional protection for the Active Directory schema, remove all users from the Schema Admins group, and add a user to the group only when schema changes need to be made. Once the change has been made remove the user from the group.
  • Always monitor AD health by ensuring proper permissions, practicing good OU management, and performing preventative maintenance.
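The Schema Admins recommendation above lends itself to scripting. A minimal sketch with the Active Directory module for PowerShell (the account name is a placeholder):

```powershell
Import-Module ActiveDirectory

# Placeholder account - add it only for the duration of the schema change
Add-ADGroupMember -Identity "Schema Admins" -Members "jdoe"

# ... perform the schema change here ...

# Remove the account again as soon as the change is complete
Remove-ADGroupMember -Identity "Schema Admins" -Members "jdoe" -Confirm:$false
```

Wrapping both calls around the change itself keeps the window during which the group is populated as short as possible.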

 


Windows 2008 Hyper-V Storage Components Configuration

Posted by Alin D on December 15, 2010

Introduction

Windows Server 2008 supports several different types of storage. You can either connect to storage physically, or by using a virtual hard drive (VHD). When Hyper-V is installed on a host, it can access the many different storage options that are available to it, including direct attached storage (DAS, such as SATA or SAS) or SAN storage (FC or iSCSI). Once you connect the storage solution to the parent partition, you can make it available to the child partition in a number of ways.

Hyper-V Storage Options

Windows Server 2008 with Hyper-V supports the use of direct attached storage, NAS, iSCSI, and Fibre Channel storage.

VHD or Pass-through Disk

A virtual hard drive (VHD) can be created on the parent partition’s volume with access granted to the child partition. The VHD operates as a set of blocks, stored as a regular file using the host OS file system (which is NTFS).

Within Hyper-V there are three types of VHDs: fixed size, dynamically expanding, and differencing disks:

  • Fixed size The full amount of space is allocated on the host’s physical storage when the virtual hard drive is created, which avoids expansion overhead later on.
  • Dynamically expanding This type of virtual hard drive starts small but automatically expands in response to need. It will expand up to the maximum size indicated when the virtual hard drive was created. “Dynamic,” in this context, is something of a misnomer: it implies that the size of the virtual drive changes up and down based on need, but in actuality the hard drive keeps expanding until it reaches the maximum limit. If you remove content from the virtual hard drive, it will not shrink to meet the new, smaller capacity.
  • Differencing A child disk that records only the changes made relative to a read-only parent VHD, which is useful for sharing a common base image across several virtual machines.

In Hyper-V, you can expose a host disk to the guest without putting a volume on it by using a pass-through disk. Hyper-V allows you to bypass the host’s file system and access the disk directly. This disk is not limited to 2,040 GB and can be a physical hard drive on the host or a logical one on a SAN.

Hyper-V ensures that the host and guest are not trying to use the disk at the same time by setting the drive to be in the offline state for the host. Pass-through disks have their downsides. You lose some VHD-related features, like VHD snapshots and dynamically expanding VHDs.

IDE or SCSI on the Guest

Configuring the child partition’s virtual machine settings requires you to choose how the disk will be shown to the guest (either as a VHD file or pass-through disk). The child partition can see the disk as either a virtual ATA device or as a virtual SCSI disk. But you do not have to expose the drive to the child partition the same way as you exposed it to the parent partition. For example, a VHD file on a physical IDE disk on the parent partition can be shown as a virtual SCSI on the guest.

What you need to decide is what capabilities you want on the guest. You can have up to four virtual IDE drives on the guest, but they are the only type the virtualized BIOS will boot from. You can have up to 256 virtual SCSI drives on the child partition, but you cannot boot from them.

You can also expose drives directly to the child partition by using iSCSI. This bypasses the parent partition completely. All you have to do is load an iSCSI initiator in the child partition and configure your partition accordingly.

Hyper-V does not support booting to iSCSI, so you still need another boot drive.
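As a rough sketch, the built-in Microsoft iSCSI initiator inside the guest can be driven from the command line with the iscsicli utility. The portal address and target IQN below are placeholders; substitute the values from your own SAN.

```
rem Make sure the Microsoft iSCSI Initiator service is running
sc config msiscsi start= auto
net start msiscsi

rem Register the target portal, list what it exposes, and log in
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:filer-target1
```

Once the login succeeds, the LUN appears in Disk Management inside the guest like any locally attached disk, ready to be brought online and formatted.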

Fibre Channel

A Fibre Channel Storage Area Network (SAN) is the most widely deployed storage solution in enterprises today. SANs came into popularity in the mid- to late 90s and have seen huge growth with the boom of data that customers are keeping and using every day. Ultimately, a SAN is really just a network that is made up of storage components, a method of transport, and an interface. A SAN is made up of a disk array, a switch, and a host bus adapter. Most hardware providers such as HP, Dell, and Sun have a SAN solution, and there are companies such as EMC, Hitachi, NetApp, and Compellent that focus primarily on SAN and storage technology.

Hyper-V Storage

Fibre Channel Protocol

The Fibre Channel Protocol is the mechanism used to transmit data across Fibre Channel networks. There are three main fabric topologies used in Fibre Channel: point-to-point, arbitrated loop, and fabric.

Point-to-Point

Point-to-point topology is a direct connection between two ports, with at least one of the ports acting as a server. This topology needs no arbitration for the storage media because there are separate links for transmission and reception. The downside is that it is limited to two nodes and is not scalable.

Arbitrated Loop

Arbitrated loop topology combines the advantages of the fabric topology (support for multiple devices) with the ease of operation of point-to-point topology. In the arbitrated loop topology, devices are connected to a central hub, like Ethernet LAN hubs. The Fibre Channel hub arbitrates (or shares) access to devices, but adds no additional functionality beyond acting as a centralized connection point.

Within the arbitrated loop category, there are two types of topologies:

  • Arbitrated loop hub Devices must seize control of the loop and then establish a point-to-point connection with the receiving device. When the transmission has ended, devices connected to the hub begin to arbitrate again. In this topology, there can be 126 nodes connected to a single link.
  • Arbitrated loop daisy-chain Devices are connected in series and the transmit port of one device is connected to the receive port on the next device in the daisy chain. This topology is ideal for small networks but is not very scalable, largely because all devices on the daisy chain must be on. And if one device fails, the entire network goes down.

Fabric

The fabric topology is composed of one or more Fibre Channel switches connected through one or more ports. Each switch typically contains 6, 16, 32, or 64 ports.

Disk Array

The SAN disk array is a grouping of hard disks in some form of RAID configuration and, just as with any RAID configuration, the more disks you have, the more I/O throughput you get.

In most modern disk arrays, the disk controller (the component that controls RAID configuration and access to the disk array) will have some form of cache built into it. The cache on the disk controller is used to increase performance of the disk subsystem for both reads and writes. In the case of virtualization, the more cache available, the better performance you will see from your virtual machines.

Fibre Channel Switch

A Fibre Channel switch is a networking switch that uses the Fibre Channel protocol and is the backbone of the storage area network fabric. These switches can be implemented as one switch or many switches (for redundancy and scalability) to provide many-to-many communications between nodes on the SAN fabric. The Fibre Channel switch uses zoning to segregate traffic between storage devices and endpoint nodes. A zone can be used to allow or deny a system access to a storage device. In the past few years three companies have really cornered the market on Fibre Channel switches: Cisco Systems, QLogic, and Brocade. When purchasing a switch, pay close attention to the backplane bandwidth that it has as well as how fast the ports are. You can get a Fibre Channel switch that supports port speeds of 2, 4, or 8 gigabits per second.

Tiered Storage

Using a technique called tiered storage, you assign different categories of your data to different types of storage media, in order to reduce total storage cost. Categories may be based on the levels of protection needed, performance issues, frequency of access, or whatever other considerations you have.

Because assigning data to a particular form of media is an ongoing and complex activity, some vendors provide software that automatically manages the process based on your organization’s policies.

For example, at tier 1, mission-critical or frequently accessed files are stored on high-capacity, fast-spinning hard drives. They might also be protected with nested (two-level) RAID configurations.

At tier 2, less important data is stored on less expensive, slower-spinning drives in a conventional SAN. As the tiers progress, the media gets slower and less expensive. As such, tier 3 of a three-tier system might contain rarely used or archived material. The lowest level of the tiered system might be simply putting the data onto DVD-ROMs.
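As a rough illustration of the kind of policy such software automates, here is a hedged PowerShell sketch that buckets files into tiers by last access time. The path and the day thresholds are illustrative placeholders, not recommendations.

```powershell
# Example policy only: classify files into storage tiers by last access time.
$now = Get-Date

Get-ChildItem -Path "D:\Data" -Recurse |
    Where-Object { -not $_.PSIsContainer } |
    ForEach-Object {
        $ageDays = ($now - $_.LastAccessTime).Days
        $tier = if ($ageDays -le 30) { "Tier 1 (fast SAN storage)" }
                elseif ($ageDays -le 180) { "Tier 2 (slower SAN storage)" }
                else { "Tier 3 (archive)" }
        "{0} -> {1}" -f $_.FullName, $tier
    }
```

A real tiering product would act on this classification by migrating the data; the sketch only reports which tier each file would land in.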

SAN Features

Today SANs come with features that can really help your enterprise manage your data and your virtual machines.

iSCSI

iSCSI is a type of SAN that uses industry-standard technologies such as Ethernet networks and Ethernet NICs for transport and interface. So far iSCSI has proven to be a great lower-cost alternative to the Fibre Channel SAN solutions that are available. If, for instance, you are building your virtualization environment on servers worth U.S. $4,000 and you want to connect them to your Fibre Channel SAN, you will have to purchase Fibre Channel adapter cards that are compatible with your SAN. Each one of the Fibre Channel cards you implement in your servers can range from $1,000 to $2,000. That can really add up if you are building out one or more 16-node clusters.

Now we’re sure some of you reading this are saying, “Well yeah, but I’ve already implemented my Fibre Channel architecture. Now you want me to implement another infrastructure just for this?”

Well, no, that’s not the case. What we’re saying is that if you don’t already have an infrastructure in place, iSCSI would be a great technology for you to research. One other consideration to keep in mind when determining whether or not to implement iSCSI is whether to run the iSCSI traffic over the same physical switches as your production network. You can do this, but we don’t recommend it. The reason is that an iSCSI network can become saturated extremely quickly and use all of the bandwidth on your backplane, creating network contention for your production services.

Direct Attached Storage

Direct attached storage (DAS) is a storage array that is directly connected, rather than connected via a storage network. The DAS is generally used for one or more enclosures that hold multiple disks in a RAID configuration.

DAS, as the name suggests, is directly connected to a machine, and is not directly accessible to other devices. For an individual computer user, the hard drive is the most common form of DAS. In an enterprise, providing storage that can be shared by multiple computers tends to be more efficient and easier to manage.

The main protocols used in DAS are ATA, SATA, SCSI, SAS, and Fibre Channel. A typical DAS system is made of one or more enclosures holding storage devices such as hard disk drives, and one or more controllers. The interface with the server or the workstation is made through a host bus adapter.

NAS

Network attached storage (NAS) is a hardware device containing multiple disks in a RAID configuration that provides just a file system for data storage, plus tools to manage that data and access to it. A NAS device is very similar to a traditional file server, but the operating system has been stripped down and optimized for file serving to a heterogeneous environment. To serve files to both UNIX/Linux and Microsoft Windows clients, most NAS devices support the NFS and SMB/CIFS protocols.

NAS is set up with its own network address. By removing storage access and its management from the server, both application programming and files can be served faster, because they are not competing for the same processor resources. NAS is connected to the LAN and assigned an IP address. File requests are mapped by the main server to the NAS server.

NAS can be a step toward, and be included as part of, a more sophisticated SAN. NAS software can usually handle a number of network protocols beyond TCP/IP, including NetBEUI and Novell’s IPX (used by NetWare).


Common Storage Configurations

Posted by Alin D on September 20, 2010

Introduction

In today’s world everything is on computers. More specifically, everything is stored on storage devices which are attached to computers in a number of configurations. There are many ways in which these devices can be accessed by users. Some are better than others and some are best for certain situations; in this article I will give an overview of some of these ways and describe some situations where one might want to implement them.

Firstly there is an architecture called Direct Attached Storage (DAS). This is what most people would think of when they think of storage devices. This type of architecture includes things like internal hard drives, external hard drives, and USB keys. Basically, DAS refers to anything that attaches directly to a computer (or a server) without any network component (like a network switch) between them.


Figure 1: Three configurations for Direct Attached Storage solutions (Courtesy of ZDNetasia.com)

A DAS device can even accommodate multiple users concurrently accessing data. All that is required is that the device have multiple connection ports and the ability to support concurrent users. DAS configurations can also be used in large networks when they are attached to a server which allows multiple users to access the DAS devices. The only thing that DAS excludes is the presence of a network device between the storage device and the computer.

Many home users and small businesses opt for Network Attached Storage (NAS). NAS devices offer the convenience of centrally locating your storage devices, though not necessarily located with your computers. This feature is convenient for home users who may want to keep their storage devices in the basement while roaming about the house with a laptop. It is equally appealing to small businesses where it may not be appropriate to have large storage devices where clients or customers are present. DAS configurations could also provide this feature, though not as easily or elegantly for smaller implementations.


Figure 2: Diagram of a Network Attached Storage system (Courtesy of windowsnas.com)

A NAS device is basically a stripped down computer. Though they don’t have monitors or keyboards, they do have stripped down operating systems which you can configure, usually by connecting to the device via a web browser from a networked computer. NAS operating systems are typically stripped down versions of UNIX operating systems, such as the open source FreeNAS which is a stripped down version of FreeBSD. FreeNAS supports many protocols such as CIFS, FTP, NFS, TFTP, AFP, RSYNC, and iSCSI. Since FreeNAS is open source, you’re also free to add your own implementation of any protocol you wish. In a future article I will provide more in-depth information on these protocols, so stay tuned.

Because NAS devices handle the file system functions themselves, they do not need a server to handle these functions for them. Networks that employ DAS devices attached to a server will require the server to handle the file system functions. This is another advantage of NAS over DAS. NAS “frees up” the server to do other important processing tasks because a NAS device is connected directly to the network and handles all of the file serving itself. This also means that a NAS device can be simpler to configure and maintain for smaller implementations because they won’t require a dedicated server.

NAS systems commonly employ RAID configurations to offer users a robust storage solution. In this respect NAS devices can be used in a similar manner as DAS devices (for robust data backup). The biggest, and most important, difference between NAS systems and DAS systems is that NAS systems contain at least one networking device between the end users and the NAS device(s).

NAS solutions are similar to another storage configuration called Storage Area Networks (SAN). The biggest difference between a NAS system and a SAN system is that a NAS device handles the file system functions of an operating system while a SAN system provides only block-based storage services and leaves the file system functions to be performed by the client computer.

Of course, that’s not to say that NAS can’t be employed in conjunction with SAN. In fact, large networks often employ SAN with NAS and DAS to meet the diverse needs of their network users.

One advantage that SAN systems have over NAS systems is that NAS systems are not as readily scalable. SAN systems can quite easily add servers in a cluster to handle more users. NAS systems employed in networks where the networks are growing rapidly are often incapable of handling the increase in traffic, even if they can handle the storage capacity.

This doesn’t mean that NAS systems aren’t scalable at all. You can, in fact, cluster NAS devices in a similar manner to how one would cluster servers in a SAN system. Doing this still allows full file access from any node in the NAS cluster. But just because something can be done doesn’t mean it should be done; if you’re thinking of going down this path, tread carefully – I would recommend implementing a SAN solution instead.


Figure 3: Diagram of a Storage Area Network (Courtesy of anildesai.net)

However, NAS systems are typically less expensive than SAN systems and in recent years NAS manufacturers have concentrated on expanding their presence on home networks where many users have high storage demands for multimedia files. For most home users a less expensive NAS system which doesn’t require a server and rack space is a much more attractive solution when compared with implementing a SAN configuration.

SAN systems have many advantages over NAS systems. For instance, it is quite easy to replace a faulty server in a SAN system, whereas it is much more difficult to replace a NAS device that may or may not be clustered with other NAS devices. It is also much easier to geographically distribute storage arrays within a SAN system. This type of geographic distribution is often desirable for networks wanting a disaster-tolerant solution.

The biggest advantage of SAN systems is that they offer simplified management, scalability, flexibility, and improved data access and backup. For this reason SAN configurations are becoming quite common for large enterprises that take their data storage seriously.

Apart from large networks, SAN configurations are not very common. One exception is the video editing industry, which requires a high-capacity storage environment along with high bandwidth for data access. A SAN configuration using Fibre Channel is really the best solution for video editing networks and networks in similar industries.

While any of these three configurations (DAS, NAS, and SAN) can address the needs of most networks, putting a little bit of thought into the network design can save a lot of future effort as the network grows or the need arises to upgrade various aspects of it. Choosing the right configuration is important: you need a configuration that meets your network’s current needs and any predictable needs of the near- to medium-term future.


Using the Microsoft Offline Virtual Machine Servicing Tool Version 2.1 with WSUS (Part 1)

Posted by Alin D on September 10, 2010

Introduction

In Part 1 of this article, you will learn how the Microsoft Offline Virtual Machine Servicing Tool 2.1 (OVMST 2.1) integrates with System Center Virtual Machine Manager 2008 R2 and Windows Server Update Services in order to update offline virtual machines. In Part 2 of the article, you will learn how to configure WSUS 3.0 SP2, VMM 2008 R2, OVMST 2.1, and virtual machine clients to perform offline virtual machine updates.

What is an Offline Virtual Machine?

One of the key components in an enterprise virtualization infrastructure is a repository of components that are used to efficiently and rapidly provision virtual machines. In Microsoft System Center Virtual Machine Manager 2008 R2 (VMM 2008 R2), the repository is called a VMM library. A VMM library stores components such as:

  • Hardware profiles
  • Guest operating system profiles
  • Virtual machine templates
  • Virtual hard disks
  • Virtual floppy disks
  • PowerShell scripts
  • Sysprep files
  • Offline virtual machines

An offline virtual machine is a Windows virtual machine that is stored in a VMM library in an exported state. An exported virtual machine consists of one or more virtual hard disks (VHDs) and a configuration file (.EXP file extension). The configuration file contains virtual machine settings in a format that Hyper-V can use to re-create the virtual machine through the import function. It is important to note that the virtual machine VHDs are not altered during the export process. Once exported, the offline virtual machine configuration file is stored in the VMM library database along with a link to the VHD files. The virtual machine VHDs are stored in a VMM library share.

The Problem with Offline Virtual Machines

The assumption that goes along with creating and storing an offline virtual machine in the VMM library is that it will be redeployed to a Hyper-V host (or Virtual Server 2005 R2 SP1) at some later point in time. Of course, if several weeks or months elapse before the offline virtual machine is redeployed, it will likely require several operating system and application patches to restore it to an updated state. In most enterprises today, only updated systems are allowed to be connected to the corporate network. Therefore, before deploying the virtual machine back into the production network, it would have to be deployed to a quarantined network to perform the updates. Although this is a feasible approach, it would be more desirable to periodically update offline virtual machines, so that when it is time to redeploy into production only a minimal number of updates are required (if any) to bring it up-to-date. Microsoft addressed this issue with the development of the Offline Virtual Machine Servicing Tool (OVMST), which automates the offline virtual machine update process.

OVMST 2.1 Overview

OVMST 2.1 is a Microsoft Solution Accelerator product released in December 2009. It is available as a free download from the Microsoft website. OVMST 2.1 provides the ability to orchestrate the automated update of offline virtual machines stored in a VMM library when configured and integrated with System Center Virtual Machine Manager 2008 (or R2), System Center Configuration Manager 2007 (SP1, R2, or SP2), and/or Windows Server Update Services (WSUS) 3.0 SP1 or a later version. Perhaps obvious, but still worth mentioning, this infrastructure requires Active Directory Domain Services (ADDS), and that servers and virtual machines are members of the AD domain.

OVMST 2.1 supports Hyper-V running on Windows Server 2008 SP2, Hyper-V R2 running on Windows Server 2008 R2, and Virtual Server 2005 R2 SP1. However, Virtual Server 2005 R2 SP1 cannot serve as a host for virtual machines exported from Hyper-V or Hyper-V R2 since the export format is incompatible.

In addition, OVMST 2.1 can orchestrate offline virtual machine updates for the following Windows guest operating systems:

  • Windows XP Professional SP2 (64-bit)
  • Windows XP Professional SP3 (32-bit)
  • Windows Server 2003 SP2 (32 and 64-bit)
  • Windows Server 2003 R2 SP2 (32 and 64-bit)
  • Windows Vista SP1 and SP2 (32 and 64-bit)
  • Windows Server 2008 SP2 (32 and 64-bit)
  • Windows Server 2008 R2 (64-bit)
  • Windows 7 (32 and 64-bit)

If Windows 7 or Windows Server 2008 R2 offline virtual machines need to be updated using OVMST 2.1 and WSUS, or in conjunction with System Center Config Mgr 2007 SP2, then WSUS 3.0 SP2 is a requirement. WSUS 3.0 SP2 is also available as a free download from the Microsoft website.

OVMST 2.1 Components

OVMST 2.1 is composed of a management console, a workflow engine, and a collection of scripts used by the workflow engine to perform the various tasks that are required during an update cycle. In order to execute processes on remote client virtual machines (offline virtual machines temporarily deployed on a Hyper-V host to perform updates), OVMST 2.1 relies on the use of the PsExec utility developed by Mark Russinovich, formerly from Winternals, and currently a Technical Fellow in the Platform and Services Division at Microsoft. The PsExec utility must be downloaded separately and installed on the same machine as the OVMST 2.1 application.

The OVMST 2.1 management console seen in Figure 1 is an MMC-based application that allows configuration of the tool, creation of virtual machine groups, assignment of virtual machines to virtual machine groups, as well as creation and scheduling of update servicing jobs.


Figure 1: OVMST 2.1 Management Console

OVMST 2.1 uses servicing jobs to manage the update operations. A servicing job combines configuration settings with Windows batch files, VB scripts, and Windows PowerShell cmdlets that make up a task managed by the Windows Task Scheduler. Specifically, a servicing job defines the following configuration settings:

  • Software update management system (System Center Config Mgr or WSUS)
  • Target offline virtual machines
  • Virtual network to connect virtual machines for updates
  • Hyper-V maintenance hosts to deploy the virtual machines
  • Account credentials with administrative permissions on the virtual machines
  • Execution schedule

A servicing job can target one or more offline virtual machines organized in virtual machine groups created within OVMST 2.1. A virtual machine group allows you to assign specific virtual machines to a collection that is then easily selected as the target of a specific servicing job.

Offline Virtual Machine Update Workflow

The main tasks that are performed during an OVMST 2.1 servicing job include the following steps:

  • Deploying a virtual machine from a VMM library to a Virtual Server or Hyper-V server identified as a maintenance host in System Center VMM
  • Configuring the virtual network settings
  • Powering on the virtual machine
  • Triggering the software update cycle using System Center Config Mgr or WSUS.
  • Monitoring the installation of updates and virtual machine reboots
  • Powering off the updated virtual machine
  • Exporting the virtual machine
  • Storing the virtual machine files back in the VMM library

Figure 2 represents a more detailed schematic of the servicing job workflow when using WSUS to perform offline virtual machine updates.


Figure 2: OVMST 2.1 Workflow with WSUS Integration

If you are interested in reviewing the actual scripts that are used to perform the various tasks described in Figure 2, you can find them in %SystemDrive%\Program Files\Microsoft Offline Virtual Machine Servicing Tool\Script after installation of the application on your VMM server.

As you can infer from this diagram, the update process is I/O intensive, requiring the transfer of potentially large virtual machine VHDs between the System Center VMM server and the maintenance hosts (Hyper-V or Virtual Server 2005 R2 SP1 servers). Therefore, in an environment with a large repository of offline virtual machines to update, best performance can be achieved using a storage area network (SAN) infrastructure, preferably with Fibre Channel connections.

Another important consideration is the networking configuration to use so that you can ensure isolation of the virtual machine clients during the update process. Even with the infrastructure components deployed in a production corporate network environment, you can configure and use a VLAN to secure the network traffic between the System Center VMM server, WSUS server, maintenance hosts, and the target virtual machines. Additionally, you must ensure that other services required during the update can also communicate across the VLAN (e.g., AD Domain Services).

Conclusion

In Part 1 of this article, you were introduced to the Microsoft Offline Virtual Machine Servicing Tool, Version 2.1, and how it can help you resolve the problem of updating offline virtual machines stored in a VMM library. In Part 2 of the article, you will learn about OVMST 2.1 installation requirements, as well as obtain step-by-step procedures to install and configure OVMST 2.1, and configure and store target VMs as offline virtual machines in a VMM library. You will also learn how to create and monitor an OVMST 2.1 servicing job.
