Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘Fibre Channel’

Windows 2012 – SMB 3.0 Review

Posted by Alin D on September 17, 2012

At its core, the resource sharing protocol SMB 3.0 allows users to connect to file shares and printers. But this latest version, new in Windows Server 2012, offers resource sharing enhancements that have big implications for Hyper-V admins.
SMB 3.0 doesn’t deviate from the basic construct: a Windows admin accesses remote files by typing \\Servername\Sharename to reach a location on another server or workstation. The protocol still allows for file access, but it performs resource sharing in a new, multi-channeled and multithreaded way that allows for much faster and more reliable throughput, as well as encryption. This change enables users to access much larger files in real time and at high speed.

However, if SMB 3.0 is not architected to match your business needs and service-level requirements, it will only create a performance bottleneck. By first examining your options for designing a Hyper-V infrastructure on the new SMB 3.0 protocol, and the implications for a test environment, you’ll be equipped to avoid those bottlenecks.

What to expect with SMB 3.0

Using remote clustered shares with the SMB 3.0 protocol creates a transport channel that functions as a high-speed, threaded data channel for connecting to storage, similar to the transport channels created with iSCSI or Fibre Channel. This viable third option for remote file access in Windows Server 2012 creates a simple way to design your Hyper-V virtual machine (VM) storage. It enables connections between Hyper-V hosts running on Windows Server 2012 and an SMB share located on a Windows Server 2012 server or cluster of servers. The cluster nodes might use iSCSI or Fibre Channel as back-end storage. This configuration gives you the benefit of pointing multiple Hyper-V VMs at the same share on a cluster with high-bandwidth connections.
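As a rough illustration, such a share can be created with the SmbShare PowerShell module that ships with Windows Server 2012. The server names, path and Hyper-V host computer accounts below are placeholders, and both the share permissions and the NTFS ACL must grant the Hyper-V hosts' computer accounts full control:

    # Run on the file server: create the folder and share it for Hyper-V VM storage
    New-Item -Path D:\VMStore -ItemType Directory
    New-SmbShare -Name VMs -Path D:\VMStore -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$"
    # Mirror the permissions on the NTFS ACL so the hosts can write VM files
    icacls D:\VMStore /grant "CONTOSO\HV01$:(OI)(CI)F" "CONTOSO\HV02$:(OI)(CI)F"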

Designing your infrastructure around the resource sharing protocol

In order to house your Hyper-V VM storage, configuration files and snapshots on a remote share location using SMB 3.0, you must run Hyper-V under Windows Server 2012. The remote share location must run Windows Server 2012 as well. The word from Microsoft at this point is that the code will not be backported to previous versions of Windows Server.

How you design a Hyper-V infrastructure around remote share Hyper-V VM storage will vary according to your storage I/O needs and the nature of the workload. The infrastructure could simply consist of a single server with local storage.

Or it could be more complex and consist of a multi-node cluster with multiple high bandwidth network interfaces. Your I/O, redundancy requirements and budget will all be important factors contributing to the end design.
SMB 3.0 in a test environment

For a limited test environment that does not require high availability, you only need a Hyper-V host and a secondary server with some onboard storage. For example, two desktops or two lower-end servers with single 1 Gigabit Ethernet connections could suffice, but to use SMB 3.0, they must both be able to run Windows Server 2012.

You could also expand to multiple standalone test Hyper-V hosts providing the processing power of the environment, all pointing back to one Windows Server 2012 server for storage. This reduces the amount of distributed (and often wasted) local storage and removes the need to provision additional transport technologies, such as iSCSI or Fibre Channel, on each Hyper-V host. This configuration makes the remote storage a single point of failure, but it reduces costs and provides a very simple architecture.
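To illustrate, once the share exists and the host accounts have access, creating a test VM whose files live entirely on the remote share might look like the following sketch, using the Windows Server 2012 Hyper-V module (the \\FS01\VMs path and VM name are placeholders):

    # Run on the Hyper-V host: both the configuration and the VHDX land on the SMB 3.0 share
    New-VM -Name "TestVM01" -MemoryStartupBytes 2GB `
           -Path "\\FS01\VMs" `
           -NewVHDPath "\\FS01\VMs\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 60GB
    Start-VM -Name "TestVM01"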

Once you’ve evaluated SMB 3.0 in a test environment, you only need to go through a few more steps to make your infrastructure production-ready. Learn how in part two of this series.

Posted in Windows 2012 | Tagged: , , , , , , | Leave a Comment »

How to create a multi-site Hyper-V failover cluster

Posted by Alin D on June 21, 2011

Multi Site Hyper-V Cluster

Multi Site Hyper-V Cluster Diagram

Before you build a multi-site cluster for Hyper-V failover purposes, you have to take a look at the hardware and storage requirements for highly available Hyper-V environments. After connecting servers to the proper networking and shared storage equipment, building a multi-site cluster is a relatively simple process.

First, use the Validate a Configuration wizard in Failover Cluster Manager to run a series of tests. If everything passes, you can create a Hyper-V failover cluster.
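The same validation and creation steps can also be scripted with the FailoverClusters PowerShell module; the node names and cluster IP address below are placeholders:

    Import-Module FailoverClusters
    # Run the full validation suite against the intended nodes
    Test-Cluster -Node HV-NODE1, HV-NODE2
    # If validation passes, create the cluster
    New-Cluster -Name HVCLUSTER1 -Node HV-NODE1, HV-NODE2 -StaticAddress 10.0.1.50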

Stretching a cluster across multiple sites isn’t an exceedingly complex process. Today’s Windows Failover Cluster service in Windows Server 2008 R2 already includes most of the necessary components for creating a multi-site cluster, also known as a “stretch cluster” or “GeoCluster.” The only missing component is a mechanism to replicate shared storage between sites.

Single-site clusters limit Hyper-V failover

To truly understand the utility of a multi-site cluster, consider the types of failures against which a single-site cluster protects. In this arrangement, the Hyper-V hosts connect to a piece of shared storage. The “shared” part of this storage is important, because any connections between servers and storage are limited to their maximum cable lengths. Both Fibre Channel and iSCSI storage have a maximum effective distance that limits how far you can physically spread your servers.

While this architecture is excellent for protecting virtual machines (VMs) against the loss of a single host, it does little when an entire site goes down. An outage can occur during a catastrophic event, such as a natural disaster. More commonly, it can happen because of a site-wide problem, such as a network or power outage.

During a site-wide failure, a single-site cluster cannot protect against the loss of VM functionality, because the hosted VM and its processing, storage and networking reside at the same location. Therefore, all these components will experience a failure during a site-wide problem.

Creating a multi-site cluster for Hyper-V failover

A multi-site cluster protects cluster functionality by extending it to one or more additional physical locations. Using Windows Failover Clustering, the shared storage contents are copied to a secondary site.

Data replication for cluster storage is typically accomplished through synchronous or asynchronous replication. With synchronous replication, each piece of data that is replicated between the two interconnected storage area networks (SANs) must be confirmed at the secondary site before the next piece of data can be processed. This acknowledgement ensures that the data is transferred between storage devices, thus guaranteeing that the two SANs are always synchronized.

Synchronous replication is excellent for data preservation, but it comes at the cost of performance. Because each piece of transferred data must be acknowledged before the next data is processed, the transfer speed can quickly bottleneck overall performance.

Asynchronous replication circumvents this performance problem by allowing the sending SAN to queue up data that requires replication. Data is then sent in a batch at configurable intervals, with the entire batch acknowledged at once. So asynchronous replication does not pose the performance bottlenecks involved with synchronous replication, but when a site failure occurs, you risk losing some data.

Solutions for asynchronous replication are implemented as features within your SAN storage or as software add-ons to VMs or a Hyper-V host. Each approach comes with benefits and drawbacks. Ultimately, you must weigh the possibility of data loss against reduced system performance to decide which option is best.

Architecting and implementing replicated storage between your two sites is arguably the most difficult part of creating a multi-site cluster. Once the storage is correctly configured, you’ll find the remaining tasks are trivial in comparison, including provisioning additional Hyper-V hosts at the secondary site, adding them into the existing cluster, and configuring failover and other cluster settings to ensure that VMs migrate only during a full-site failure.

Two other considerations that require special attention with multi-site clusters are the reconfiguration of the cluster quorum and the reconvergence of name resolution at the secondary site.

Quorums and Hyper-V failover

A cluster by nature is always prepared for failure. At its core, a Hyper-V failover cluster always watches for components to go down and, when a failure occurs, knows which action to take.

One way the cluster facilitates this task is through a quorum. In essence, a quorum is a collection of cluster elements that determine whether there are enough resources available for the cluster to function.

Quorums use a “voting system” to decide whether a cluster should remain online. There are several ways to configure the voting process:

  • counting the number of votes cast by individual hosts;
  • counting votes from hosts plus shared storage; or
  • counting votes from hosts plus a file share witness in a third and separate site.

When creating a multi-site cluster, carefully consider your quorum options.
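For illustration, each of the voting models above maps to a quorum mode that can be set with the FailoverClusters module; the disk resource name and witness share path are placeholders:

    # Votes from hosts only
    Set-ClusterQuorum -NodeMajority
    # Votes from hosts plus shared storage
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
    # Votes from hosts plus a file share witness in a third site
    Set-ClusterQuorum -NodeAndFileShareMajority "\\WITNESS01\ClusterWitness"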

DNS resolution after Hyper-V failover

The final consideration for a multi-site cluster is the need for name resolution after a VM fails over from one site to another. Today’s Windows Failover Cluster service has the ability to span subnets (and IP address ranges). This process simplifies a cluster installation, because the network subnets no longer have to span between sites. But when failed-over VMs relocate to a new IP address range, the move complicates name resolution.

In short, when your VMs fail over from a primary site to a secondary site, they fail over to the secondary site’s IP address scheme. As a result, the IP address configuration for these VMs must be reconfigured at the time of failover. Also, clients must flush their local domain name server (DNS) cache to receive the server’s new address information.

Setting up virtual servers to use the Dynamic Host Configuration Protocol for address configuration simplifies their configuration update. For clients, the problem can be resolved by a reboot, by clearing the local DNS cache with the ipconfig /flushdns command, or by minimizing the time-to-live (TTL) setting for the servers' DNS entries.
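As a simple example (the DNS server, zone, host name, TTL and address are all placeholders), the client-side cache can be cleared and a short TTL applied when the server's A record is registered:

    # On clients: clear the local DNS cache after a failover
    ipconfig /flushdns
    # On the DNS server: (re)register the VM's A record with a 5-minute TTL
    dnscmd DNS01 /RecordAdd contoso.com mailvm01 300 A 10.2.0.25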

While multi-site Hyper-V failover clusters have special requirements for storage and data replication, the extension of your Windows Failover Cluster should not be difficult to set up. With the right technology and good planning, you can extend your Hyper-V high availability to protect against a full-site disaster.

Posted in Windows 2008 | Tagged: , , , , , , | Leave a Comment »

Windows 2008 Hyper-V Storage Components Configuration

Posted by Alin D on December 15, 2010

Introduction

Windows Server 2008 supports several different types of storage. You can either connect to storage physically or by using a virtual hard drive (VHD). When Hyper-V is installed on a host, it can access the many different storage options that are available to it, including direct attached storage (DAS, such as SATA or SAS) or SAN storage (FC or iSCSI). Once you connect the storage solution to the parent partition, you can make it available to the child partition in a number of ways.

Hyper-V Storage Options

Windows Server 2008 with Hyper-V supports the use of direct attached storage, NAS, iSCSI, and Fibre Channel storage.

VHD or Pass-through Disk

A virtual hard drive (VHD) can be created on the parent partition’s volume with access granted to the child partition. The VHD operates as a set of blocks, stored as a regular file using the host OS file system (which is NTFS).

Within Hyper-V there are different types of VHDs, including fixed size, dynamically expanding, and “differencing” disks:

  • Dynamically expanding This type of virtual hard drive starts small but automatically expands as needed, up to the maximum size indicated when the virtual hard drive is created. “Dynamic,” in this context, is something of a misnomer: it implies that the size of the virtual drive changes up and down based on need, but in actuality the file only keeps growing until it reaches the maximum limit. If you remove content from the virtual hard drive, it will not shrink to match the new, smaller capacity.
  • Fixed size This type allocates the full amount of space on the host volume when the VHD is created. It does not grow or shrink afterward, which makes its performance more predictable at the cost of consuming the entire capacity up front.
  • Differencing This type stores only the changes made relative to a read-only parent VHD. It is useful for test scenarios, but performance depends on the whole parent/child chain remaining intact.

In Hyper-V, you can expose a host disk to the guest without putting a volume on it by using a pass-through disk. Hyper-V allows you to bypass the host’s file system and access the disk directly. This disk is not limited to 2,040 GB and can be a physical hard drive on the host or a logical one on a SAN.

Hyper-V ensures that the host and guest are not trying to use the disk at the same time by setting the drive to the offline state on the host. Pass-through disks have their downsides, though: you lose some VHD-related features, such as VHD snapshots and dynamically expanding VHDs.
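A minimal sketch of the two approaches, using the in-box Hyper-V PowerShell module that arrived in later Windows Server releases (the original Windows Server 2008 Hyper-V has no native cmdlets); the VM name, paths and disk number are placeholders:

    # Option 1: a dynamically expanding VHD stored on the host's file system
    New-VHD -Path "D:\VMs\Data01.vhd" -SizeBytes 100GB -Dynamic
    Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -Path "D:\VMs\Data01.vhd"
    # Option 2: a pass-through disk; take the physical disk offline on the host first
    Set-Disk -Number 2 -IsOffline $true
    Add-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -DiskNumber 2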

IDE or SCSI on the Guest

Configuring the child partition’s virtual machine settings requires you to choose how the disk will be shown to the guest (either as a VHD file or a pass-through disk). The child partition can see the disk as either a virtual ATA device or as a virtual SCSI disk. But you do not have to expose the drive to the child partition the same way you exposed it to the parent partition. For example, a VHD file on a physical IDE disk on the parent partition can be shown as a virtual SCSI disk on the guest.

What you need to decide is which capabilities you want on the guest. You can have up to four virtual IDE drives on the guest, and they are the only type the virtualized BIOS will boot from. You can have up to 256 virtual SCSI drives on the child partition, but you cannot boot from them.

You can also expose drives directly to the child partition by using iSCSI. This bypasses the parent partition completely. All you have to do is load an iSCSI initiator in the child partition and configure your partition accordingly.

Hyper-V does not support booting to iSCSI, so you still need another boot drive.
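On a guest running Windows Server 2012 or later, the in-box iSCSI cmdlets sketch this out (the target portal address is a placeholder); older guests would use the iSCSI Initiator control panel (iscsicpl) or the iscsicli command instead:

    # Inside the child partition: start the initiator service and point it at the target portal
    Start-Service msiscsi
    New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
    # Discover the target and connect, persisting the connection across reboots
    $target = Get-IscsiTarget
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true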

Fibre Channel

A Fibre Channel storage area network (SAN) is the most widely deployed storage solution in enterprises today. SANs rose to popularity in the mid- to late 90s and have grown enormously with the boom in data that customers keep and use every day. Ultimately, a SAN is really just a network made up of storage components, a method of transport, and an interface: a disk array, a switch, and a host bus adapter. Most hardware providers, such as HP, Dell, and Sun, have a SAN solution, and there are companies such as EMC, Hitachi, NetApp, and Compellent that focus primarily on SAN and storage technology.

Hyper-V Storage


Fibre Channel Protocol

The Fibre Channel Protocol is the mechanism used to transmit data across Fibre Channel networks. There are three main fabric topologies used in Fibre Channel: point-to-point, arbitrated loop, and fabric.

Point-to-Point

Point-to-point topology is a direct connection between two ports, with at least one of the ports acting as a server. This topology needs no arbitration for the storage media because there are separate links for transmission and reception. The downside is that it is limited to two nodes and is not scalable.

Arbitrated Loop

Arbitrated loop topology combines the advantages of the fabric topology (support for multiple devices) with the ease of operation of point-to-point topology. In the arbitrated loop topology, devices are connected to a central hub, like Ethernet LAN hubs. The Fibre Channel hub arbitrates (or shares) access to devices, but adds no additional functionality beyond acting as a centralized connection point.

Within the arbitrated loop category, there are two types of topologies:

  • Arbitrated loop hub Devices must seize control of the loop and then establish a point-to-point connection with the receiving device. When the transmission has ended, devices connected to the hub begin to arbitrate again. In this topology, there can be 126 nodes connected to a single link.
  • Arbitrated loop daisy-chain Devices are connected in series and the transmit port of one device is connected to the receive port on the next device in the daisy chain. This topology is ideal for small networks but is not very scalable, largely because all devices on the daisy chain must be on. And if one device fails, the entire network goes down.

Fabric

The fabric topology is composed of one or more Fibre Channel switches connected through one or more ports. Each switch typically contains 6, 16, 32, or 64 ports.

Disk Array

The SAN disk array is a grouping of hard disks in some form of RAID configuration and, as with any RAID configuration, the more disks you have, the more I/O throughput you get.

In most modern disk arrays, the disk controller (the component that controls RAID configuration and access to the disk array) will have some form of cache built into it. The cache on the disk controller is used to increase performance of the disk subsystem for both reads and writes. In the case of virtualization, the more cache available, the better performance you will see from your virtual machines.

Fibre Channel Switch

A Fibre Channel switch is a networking switch that uses the Fibre Channel protocol and is the backbone of the storage area network fabric. These switches can be implemented as one switch or many switches (for redundancy and scalability) to provide many-to-many communications between nodes on the SAN fabric. The Fibre Channel switch uses zoning to segregate traffic between storage devices and endpoint nodes. A zone can be used to allow or deny a system access to a storage device. In the past few years three companies have really cornered the market on Fibre Channel switches: Cisco Systems, QLogic, and Brocade. When purchasing a switch, pay close attention to its backplane bandwidth as well as how fast the ports are. You can get a Fibre Channel switch that supports port speeds of 2, 4, or 8 gigabits per second.

Tiered Storage

Using a technique called tiered storage, you assign different categories of your data to different types of storage media, in order to reduce total storage cost. Categories may be based on the levels of protection needed, performance issues, frequency of access, or whatever other considerations you have.

Because assigning data to a particular form of media is an ongoing and complex activity, some vendors provide software that automatically manages the process based on your organization’s policies.

For example, at tier 1, mission-critical or frequently accessed files are stored on high-capacity, fast-spinning hard drives. They might also have double-level RAIDs on them.

At tier 2, less important data is stored on less expensive, slower-spinning drives in a conventional SAN. As the tiers progress, the media gets slower and less expensive. As such, tier 3 of a three-tier system might contain rarely used or archived material. The lowest level of the tiered system might be simply putting the data onto DVD-ROMs.

SAN Features

Today SANs come with features that can really help your enterprise manage your data and your virtual machines.

iSCSI

iSCSI is a type of SAN that uses commodity technologies, such as standard Ethernet networks and Ethernet NICs, for transport and as the interface. So far iSCSI has proven to be a great lower-cost alternative to the Fibre Channel SAN solutions that are available. If, for instance, you are building your virtualization environment on servers worth U.S. $4,000 and you want to connect them to your Fibre Channel SAN, you will have to purchase Fibre Channel adapter cards that are compatible with your SAN. Each Fibre Channel card you put into a server can range from $1,000 to $2,000. That quickly becomes a significant additional cost if you are building out one or more 16-node clusters.

Now we’re sure some of you reading this are saying, “Well yeah, but I’ve already implemented my Fibre Channel architecture. Now you want me to implement another infrastructure just for this?”

Well, no, that’s not the case. What we’re saying is that if you don’t already have an infrastructure in place, iSCSI would be a great technology for you to research. One other consideration to keep in mind when deciding whether to implement iSCSI is whether to run the iSCSI traffic on the same physical switches as your production network. You can do this, but we don’t recommend it, because an iSCSI network can become saturated extremely quickly and consume all of the bandwidth on your switch backplane, creating network contention for your production services.

Direct Attached Storage

Direct attached storage (DAS) is a storage array that is directly connected to a server rather than connected via a storage network. A DAS solution generally consists of one or more enclosures that hold multiple disks in a RAID configuration.

DAS, as the name suggests, is directly connected to a machine, and is not directly accessible to other devices. For an individual computer user, the hard drive is the most common form of DAS. In an enterprise, providing storage that can be shared by multiple computers tends to be more efficient and easier to manage.

The main protocols used in DAS are ATA, SATA, SCSI, SAS, and Fibre Channel. A typical DAS system is made of one or more enclosures holding storage devices such as hard disk drives, and one or more controllers. The interface with the server or the workstation is made through a host bus adapter.

NAS

Network attached storage (NAS) is a hardware device that contains multiple disks in a RAID configuration and purely provides a file system for data storage, along with tools to manage that data and access to it. A NAS device is very similar to a traditional file server, but the operating system has been stripped down and optimized for serving files to a heterogeneous environment. To serve files to both UNIX/Linux and Microsoft Windows clients, most NAS devices support the NFS and SMB/CIFS protocols.

NAS is set up with its own network address. By removing storage access and its management from the server, both application programming and files can be served faster, because they are not competing for the same processor resources. NAS is connected to the LAN and assigned an IP address. File requests are mapped by the main server to the NAS server.

NAS can be a step toward, and included as part of, a more sophisticated SAN. NAS software can usually handle a number of network protocols, including Microsoft’s NetBEUI and Novell’s Internetwork Packet Exchange (IPX) and NetWare configurations.

Posted in Windows 2008 | Tagged: , , , , , , , , , , | Leave a Comment »

Availability and recovery options when running Exchange 2010 in a virtual environment

Posted by Alin D on September 27, 2010

Virtual servers can benefit an organization’s data crunching needs in many ways. One of them is leveraging their native benefits to broaden the availability and recovery options for Microsoft Exchange 2010 deployments.

Most administrators can cite the benefits of virtual machines by rote:

* They’re portable, so Exchange need no longer be bound to a particular piece of hardware. That means design decisions don’t need to be permanent. CPU and memory requirements can be changed with a reconfiguration and a reboot. What’s more, new hardware can be more easily accommodated because the virtual machine containing Exchange can simply be transferred to the new machine.

* They’re hardware independent, so planners have greater design flexibility in putting together both the production and the disaster recovery components of a system.

Some virtual machine vendors, like VMware, have included robust availability features into their software. For example, the company’s High Availability product can act as a first line of defense against server failure. If a physical server or any critical component in a server goes down or fails, HA will automatically reboot the Exchange virtual machine on another physical server.

Another VMware product, Distributed Resource Scheduler, is designed to automatically manage workloads for virtual machines on a network. Better management of demand on a network means less latency and happier users. For example, if a virtual machine becomes bottlenecked, DRS can automatically move it to another host with more resources. Better yet, it can do that without subjecting the system to downtime.

The product can also speed recovery from hardware failures. For instance, after HA addresses a breakdown in a physical server by moving an Exchange virtual machine to another physical server, it’s DRS that migrates the Exchange VM back to its original home after it’s fixed, once again without downtime or any hiccups to the system’s users.

Running Exchange in a virtual environment can increase the availability of the program across its lifecycle. Virtualized Exchange can recover from planned or unplanned hardware outages, from hardware degradation through better load management, and from application failure by using Microsoft Cluster Service within a virtual machine.

In addition, the architecture of virtual machines has multi-pathing capabilities and advanced queueing techniques that can be leveraged in a virtual Exchange environment to improve network performance. For instance, they can be used to increase IOPS transactions, which will allow more clients to be served. Those technologies can also be used to balance the workloads of multiple Exchange servers that are sharing the same physical server to use multiple SAN paths and storage processor ports.

An added bonus of locating Exchange on a VMware Virtual Machine File System (VMFS) volume is the avoidance of SAN errors. That’s because VMFS hides SAN errors from guest operating systems.

Upgrades can be a bear in Exchange environments. Not only are they complicated to perform, but they can produce downtime which doesn’t produce happy faces in an organization.

A typical upgrade involves allocating engineering resources–including application, server and SAN administration–for planning and implementation, sizing and acquisition of new hardware and, of course, the downtime to perform the upgrade.

Compare that to an upgrade in a virtual environment. Scaling up your Exchange environment, for instance, is as easy as adding more Exchange virtual machines as your client base grows.

When Exchange is running on a physical server it’s tightly bound to a storage technology and can be very challenging to scale. Adding more storage to an Exchange virtual machine, however, can be easier. VMware’s vSphere software, for example, treats the new storage as a simple SCSI device. That means regardless of the storage technology–SCSI or Fibre Channel–the Exchange environment can be upgraded without a sneeze.

Changing the storage capacity for Exchange when it’s running on a physical server can be difficult, too. Not so in the virtual environment. With VMware’s Virtual Machine File System, for instance, storage capacities to Exchange virtual machines can be changed on the fly with its hot add/remove storage feature.

As VMware notes in a recent white paper on availability and recovery options when running Exchange 2010 in a virtual environment: “Although application-level clustering has been the prevalent solution for most Exchange implementations, features of the vSphere platform can enhance the overall availability of Exchange by providing options that help to limit both planned and unplanned downtime.”

“In fact,” the company added, “for many organizations, the features provided by vSphere may satisfy the availability requirements of their business without needing to follow traditional clustering approaches.”

As for organizations with high availability requirements, VMware notes, “application-level clustering can be combined with the vSphere features to create an extremely flexible environment, with options for failover and recovery at both the hardware and application levels.”

Posted in Exchange | Tagged: , , , , , , , , , , , , , , , , , | Leave a Comment »

Common Storage Configurations

Posted by Alin D on September 20, 2010

Introduction

In today’s world everything is on computers. More specifically, everything is stored on storage devices which are attached to computers in a number of configurations. There are many ways in which these devices can be accessed by users. Some are better than others and some are best for certain situations; in this article I will give an overview of some of these ways and describe some situations where one might want to implement them.

Firstly there is an architecture called Directly Attached Storage (DAS). This is what most people would think of when they think of storage devices. This type of architecture includes things like internal hard drives, external hard drives, and USB keys. Basically DAS refers to anything that attaches directly to a computer (or a server) without any network component (like a network switch) between them.


Figure 1: Three configurations for Direct Attached Storage solutions (Courtesy of ZDNetasia.com)

A DAS device can even accommodate multiple users concurrently accessing data. All that is required is that the device have multiple connection ports and the ability to support concurrent users. DAS configurations can also be used in large networks when they are attached to a server which allows multiple users to access the DAS devices. The only thing that DAS excludes is the presence of a network device between the storage device and the computer.

Many home users or small businesses require Network Attached Storage (NAS). NAS devices offer the convenience of centrally locating your storage devices, though not necessarily located with your computers. This feature is convenient for home users who may want to keep their storage devices in the basement while roaming about the house with a laptop. It is equally appealing to small businesses where it may not be appropriate to have large storage devices where clients or customers are present. DAS configurations could also provide this capability, though not as easily or elegantly for smaller implementations.


Figure 2: Diagram of a Network Attached Storage system (Courtesy of windowsnas.com)

A NAS device is basically a stripped-down computer. Though NAS devices don’t have monitors or keyboards, they do have stripped-down operating systems which you can configure, usually by connecting to the device via a web browser from a networked computer. NAS operating systems are typically stripped-down versions of UNIX-like operating systems, such as the open source FreeNAS, which is based on FreeBSD. FreeNAS supports many protocols, such as CIFS, FTP, NFS, TFTP, AFP, RSYNC, and iSCSI. Since FreeNAS is open source, you’re also free to add your own implementation of any protocol you wish. In a future article I will provide more in-depth information on these protocols; so stay tuned.

Because NAS devices handle the file system functions themselves, they do not need a server to handle these functions for them. Networks that employ DAS devices attached to a server will require the server to handle the file system functions. This is another advantage of NAS over DAS. NAS “frees up” the server to do other important processing tasks because a NAS device is connected directly to the network and handles all of the file serving itself. This also means that a NAS device can be simpler to configure and maintain for smaller implementations because they won’t require a dedicated server.

NAS systems commonly employ RAID configurations to offer users a robust storage solution. In this respect NAS devices can be used in a similar manner to DAS devices (for robust data backup). The biggest, and most important, difference between NAS systems and DAS systems is that NAS systems have at least one networking device between the end users and the NAS device(s).

NAS solutions are similar to another storage configuration called Storage Area Networks (SAN). The biggest difference between a NAS system and a SAN system is that a NAS device handles the file system functions of an operating system while a SAN system provides only block-based storage services and leaves the file system functions to be performed by the client computer.

Of course, that’s not to say that NAS can’t be employed in conjunction with SAN. In fact, large networks often employ SAN with NAS and DAS to meet the diverse needs of their network users.

One advantage that SAN systems have over NAS systems is that NAS systems are not as readily scalable. SAN systems can quite easily add servers in a cluster to handle more users. NAS systems employed in networks where the networks are growing rapidly are often incapable of handling the increase in traffic, even if they can handle the storage capacity.

This doesn’t mean that NAS systems aren’t scalable at all. You can, in fact, cluster NAS devices in a similar manner to how one would cluster servers in a SAN system. Doing this still allows full file access from any node in the NAS cluster. But just because something can be done doesn’t mean it should be done; if you’re thinking of going down this path, tread carefully – I would recommend implementing a SAN solution instead.


Figure 3: Diagram of a Storage Area Network (Courtesy of anildesai.net)

However, NAS systems are typically less expensive than SAN systems and in recent years NAS manufacturers have concentrated on expanding their presence on home networks where many users have high storage demands for multimedia files. For most home users a less expensive NAS system which doesn’t require a server and rack space is a much more attractive solution when compared with implementing a SAN configuration.

SAN systems have many advantages over NAS systems. For instance, it is quite easy to replace a faulty server in a SAN system, whereas it is much more difficult to replace a NAS device which may or may not be clustered with other NAS devices. It is also much easier to geographically distribute storage arrays within a SAN system. This type of geographic distribution is often desirable for networks wanting a disaster-tolerant solution.

The biggest advantage of SAN systems is that they offer simplified management, scalability, flexibility, and improved data access and backup. For this reason SAN configurations are becoming quite common for large enterprises that take their data storage seriously.

Apart from large networks, SAN configurations are not very common. One exception is the video editing industry, which requires a high-capacity storage environment along with high bandwidth for data access. A SAN configuration using Fibre Channel is really the best solution for video editing networks and networks in similar industries.

While any of these three configurations (DAS, NAS, and SAN) can address the needs of most networks, putting a little bit of thought into the network design can save a lot of future effort as the network grows or the need arises to upgrade various aspects of it. Choosing the right configuration is important: you need a configuration that meets your network’s current needs and any predictable needs of the near to medium term.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , , , , , , , | Comments Off on Common Storage Configurations

Windows Server 2008 R2 High Availability Technologies

Posted by Alin D on September 19, 2010

Since the inception of Windows NT, Microsoft has been pursuing the goal of extending its reach from personal computing to enterprise markets. One of the important elements of this strategy was a drive toward high availability, leading to the development of server clustering technology. In recent years, Windows’ capabilities have been extended to incorporate a virtualization platform, gaining extra momentum following the release of Windows Server 2008 and its Hyper-V component.

That momentum, however, was somewhat hampered by unfavorable comparisons with products from competing vendors.

The most commonly noted shortcoming was the inability to fail over virtual guests without incurring downtime, as is achievable with VMware’s VMotion. This gap was subsequently closed with the introduction of Live Migration and Cluster Shared Volumes (CSV) in Windows Server 2008 R2.

Live Migration

Live Migration is a new feature incorporated into clustered implementations of Windows Server 2008 R2-based Hyper-V. It makes it possible to move guest operating systems between cluster nodes without noticeable downtime, paralleling the functionality of VMware’s VMotion. Effectively, virtual machines (VMs) remain accessible to external clients and applications throughout the entire migration process, although their hosts change. This constitutes a significant improvement over the Quick Migration available in Windows Server 2008-based clustered Hyper-V implementations, where a similar process resulted in temporary downtime.

The “live” aspect of the migration is accomplished through a procedure that iteratively copies the memory pages used by the VM being migrated (referred to as its working set) over a dedicated Live Migration network from a source to a target Hyper-V host. This is repeated several times for any pages that changed during the preceding iteration, to minimize the differences between the two memory-resident instances of the VM.

The final iteration includes the state of registers and virtualized devices, followed by handles to the VM’s storage (such as VHD files or pass-through disks). Once the transfer of all resources is completed, the VM is momentarily paused and the remaining pages are copied to the target. At that point, the new VM instance is brought online on the target host and references to it are removed from the source. Finally, RARP packets are sent by the migrated VM to ensure switches are informed about the new ports associated with its IP address. The momentary downtime is not noticeable as long as the final steps of this sequence do not exceed the span of a TCP session timeout. Their duration depends primarily on the available bandwidth of the Live Migration network and on how active the migrated VM is.

It is important to point out that neither Live Migration nor VMotion in any way remediate outages caused by a failure of a host where VMs reside. In such cases, guest operating systems remain inaccessible until their automatic restart is completed on another cluster node. However, this is expected and should not diminish the appreciation of benefits delivered by both of these technologies. Most importantly, Live Migration practically eliminates the need for a maintenance window of Hyper-V hosts. It also facilitates the concept of dynamic data centers, where virtual resources are relocated between hosts to optimize their use. It is possible to automate this process by leveraging VM provisioning and intelligent placement provided by Microsoft System Center Virtual Machine Manager 2008 R2.

Cluster Shared Volumes

Cluster Shared Volumes (CSV) was designed to allow shared access to the same LUN (an acronym derived from the term Logical Unit Number, which, in Windows parlance, corresponds to a disk mounted on the local host without regard for its physical structure) by multiple cluster nodes. This represents a drastic departure from the traditional “share-nothing” Microsoft clustering model, where only a single host was permitted to carry out block I/O operations against a given disk.

Interestingly, this was made possible without reliance on a clustered file system (as implemented by VMware and available on the Windows platform with assistance from third-party products, such as Sanbolic’s Melio FS). Instead, CSV works with any shared NTFS-formatted volume, so long as the underlying hardware and software components comply with Windows Server 2008 R2-based Failover Clustering requirements.

To prevent disk corruption resulting from having multiple nodes accessing the same LUN, one of them (referred to as Coordinator and implemented as the CSVFilter.sys file system mini-filter driver) arbitrates I/O requests targeting individual VMs. It provides addresses of disk areas to which owners of these VMs are permitted to write directly. At the same time, the Coordinator node is solely responsible for locking semantics and carrying out changes affecting file system metadata (such as creating and deleting individual files or modifying their attributes).

Since CSVs contain a small number of files, in general such activities are relatively rare and constitute a small portion of overall disk activity. In some cases, however, you might want to initiate certain I/O-intensive operations not suitable for direct access (e.g., Server Message Block-based file copies, chkdsk, defragmentation or host-based backups) from the Coordinator node to maximize their speed. To determine which host functions as the Coordinator node, identify the owner of the Physical Disk clustered resource corresponding to the LUN where the CSV is located. Incidentally, this architectural design gives you an additional level of resiliency, maintaining the availability of CSV-hosted VMs even if the connectivity to the underlying storage from their host is lost. At that point, direct I/O traffic is automatically rerouted via the Coordinator node. In such cases, however, performance is likely to suffer due to the overhead of SMB communication between the nodes.

Despite a rather common misconception, CSVs are not required for Live Migration to function. It is possible to use this feature with VMs hosted on any Physical Disk resource in a Windows Server 2008 R2-based Hyper-V cluster. However, it is highly recommended to combine the benefits provided by each of these technologies. That way, you are able not only to perform independent failover of VMs stored on the same LUN but also to minimize the pause during the final stage of the migration. This is critical from a high-availability standpoint, since the use of CSV eliminates the delay associated with changing disk ownership that takes place in a traditional failover scenario.
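As a brief sketch with the Windows Server 2008 R2 FailoverClusters module (the disk resource name is a placeholder), an existing clustered Physical Disk resource is promoted to a CSV like this:

    Import-Module FailoverClusters
    # Turn an existing clustered Physical Disk resource into a Cluster Shared Volume
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    # The volume now appears under %SystemDrive%\ClusterStorage on every node
    Get-ClusterSharedVolume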

Implementing Live Migration

To implement Live Migration, you must satisfy the following requirements:

  • Install a multi-node failover cluster consisting of between 2 and 16 nodes running either Windows Server 2008 R2 Enterprise, Windows Server 2008 R2 Datacenter or Microsoft Hyper-V Server 2008 R2. Although OS editions do not have to match, it is not possible to mix full and core instances. Microsoft Hyper-V Server 2008 R2 is considered to be the latter.
  • Configure iSCSI or Fibre Channel storage shared by all nodes.
  • Dedicate a separate network subnet, shared by all nodes, to Live Migration, with bandwidth of 1 Gbps or higher, preferably via adapters (both physical and VM-based) configured with support for Jumbo Frames, TCP Chimney and Virtual Machine Queue. These capabilities were introduced in Windows Server 2008 R2-based Hyper-V. Effectively, each cluster node should have at least five network adapters to accommodate private, public, Live Migration, Hyper-V management and redirected CSV I/O traffic. This number increases if you intend to use iSCSI-based storage and want to provide some level of redundancy. In addition, the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks components should be enabled on adapters connected to the CSV network. To designate the CSV network, assign it the lowest value of the Metric parameter using the Get-ClusterNetwork PowerShell cmdlet, as shown in the sketch after this list. Conversely, disable these components on adapters intended for Live Migration. The network used for Live Migration is configurable via the Network for Live Migration tab of the VM’s Properties dialog box in the Failover Cluster Manager interface.
  • All nodes must either have the matching processor type or use Processor Compatibility Mode (available starting with Windows Server 2008 R2) to disable processor features not supported cluster-wide. Despite the additional flexibility this feature provides, the basic requirement for consistent processor architecture still must be satisfied (i.e., you cannot mix servers running AMD and Intel processors). Keep in mind that Processor Compatibility Mode might cause some applications to work in a substandard manner or even fail.
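A minimal sketch of the CSV network designation mentioned above, assuming a cluster network named “CSV” (adjust the name to your environment):

    Import-Module FailoverClusters
    # Inspect the automatically assigned metrics
    Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric
    # Give the intended CSV network the lowest metric so it carries CSV traffic
    (Get-ClusterNetwork "CSV").Metric = 900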

CSV takes the form of the %SystemDrive%\ClusterStorage folder, which appears on all cluster nodes. Each LUN added to it is represented by a subfolder, named by default Volumex (where x is a positive, sequentially assigned integer). When creating VMs that use CSV functionality, all of their components, including configuration files and VHDs corresponding to dynamically expanding, fixed-size or differencing volumes (CSV does not support pass-through disks), must reside within one of these volumes. When configuring new VMs, this happens automatically as long as you point to the appropriate Volumex subfolder of the %SystemDrive%\ClusterStorage folder on the Specify Name and Location and Connect Virtual Hard Disk pages of the New Virtual Machine Wizard. Subsequently, to make the newly created VMs highly available, they must be added as virtual machine resources using the High Availability Wizard, accessible via the Configure a Service or Application link in the Failover Cluster Manager console. Alternatively, you can combine both of these steps by using the Virtual Machines… New Virtual Machine links in the context-sensitive menu of the Services and applications node.
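As an illustrative alternative to the High Availability Wizard (the VM name is a placeholder), a VM whose files already reside under a CSV path can be made highly available directly from the FailoverClusters module:

    Import-Module FailoverClusters
    # Add an existing VM stored under C:\ClusterStorage\Volume1 to the cluster as a highly available role
    Add-ClusterVirtualMachineRole -VMName "VM02"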

Once implemented, Live Migration can be initiated manually via the management interfaces of Failover Cluster Manager and Microsoft System Center Virtual Machine Manager 2008 R2, or through PowerShell cmdlets (on which the tasks carried out by SCVMM 2008 R2 are based) and the corresponding Windows Management Instrumentation scripts. It is also possible to automate its execution by leveraging the PRO Tip functionality of SCVMM 2008 R2, or to trigger it whenever a cluster node is placed in maintenance mode.
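For example, the PowerShell equivalent of the Failover Cluster Manager action is the Move-ClusterVirtualMachineRole cmdlet; the clustered VM role name and target node below are placeholders:

    Import-Module FailoverClusters
    # Live migrate the clustered VM role to another node
    Move-ClusterVirtualMachineRole -Name "VM02" -Node "HV-NODE2" -MigrationType Live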

Just remember that any cluster node supports only a single Live Migration session (incoming or outgoing) at a time. Effectively, this means the total number of simultaneous migrations is limited to half the number of cluster nodes. It is also worth noting that CSV technology introduces operational challenges, backup in particular, which should be carefully considered before you decide to implement it.

Posted in Windows 2008 | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

Using the Microsoft Offline Virtual Machine Servicing Tool Version 2.1 with WSUS (Part 1)

Posted by Alin D on September 10, 2010

Introduction

In Part 1 of this article, you will learn how the Microsoft Offline Virtual Machine Servicing Tool 2.1 (OVMST 2.1) integrates with System Center Virtual Machine Manager 2008 R2 and Windows Server Update Services in order to update offline virtual machines. In Part 2 of the article, you will learn how to configure WSUS 3.0 SP2, VMM 2008 R2, OVMST 2.1, and virtual machine clients to perform offline virtual machine updates.

What is an Offline Virtual Machine?

One of the key components in an enterprise virtualization infrastructure is a repository of components that are used to efficiently and rapidly provision virtual machines. In Microsoft System Center Virtual Machine Manager 2008 R2 (VMM 2008 R2), the repository is called a VMM library. A VMM library stores components such as:

  • Hardware profiles
  • Guest operating system profiles
  • Virtual machine templates
  • Virtual hard disks
  • Virtual floppy disks
  • PowerShell scripts
  • Sysprep files
  • Offline virtual machines

An offline virtual machine is a Windows virtual machine that is stored in a VMM library in an exported state. An exported virtual machine consists of one or more virtual hard disks (VHDs) and a configuration file (.EXP file extension). The configuration file contains virtual machine settings in a format that Hyper-V can use to re-create the virtual machine through the import function. It is important to note that the virtual machine VHDs are not altered during the export process. Once exported, the offline virtual machine configuration file is stored in the VMM library database along with a link to the VHD files. The virtual machine VHDs are stored in a VMM library share.

The Problem with Offline Virtual Machines

The assumption that goes along with creating and storing an offline virtual machine in the VMM library is that it will be redeployed to a Hyper-V host (or Virtual Server 2005 R2 SP1) at some later point in time. Of course, if several weeks or months elapse before the offline virtual machine is redeployed, it will likely require several operating system and application patches to restore it to an updated state. In most enterprises today, only updated systems are allowed to connect to the corporate network. Therefore, before deploying the virtual machine back into the production network, it would have to be deployed to a quarantined network to perform the updates. Although this is a feasible approach, it would be more desirable to periodically update offline virtual machines, so that when it is time to redeploy one into production only a minimal number of updates (if any) are required to bring it up to date. Microsoft addressed this issue with the development of the Offline Virtual Machine Servicing Tool (OVMST), which automates the offline virtual machine update process.

OVMST 2.1 Overview

OVMST 2.1 is a Microsoft Solution Accelerator product released in December 2009. It is available as a free download from the Microsoft website. OVMST 2.1 provides the ability to orchestrate the automated update of offline virtual machines stored in a VMM library when configured and integrated with System Center Virtual Machine Manager 2008 (or R2), System Center Configuration Manager 2007 (SP1, R2, or SP2), and/or Windows Server Update Services (WSUS) 3.0 SP1 or a later version. Perhaps obvious, but still worth mentioning: this infrastructure requires Active Directory Domain Services (AD DS), and the servers and virtual machines must be members of the AD domain.

OVMST 2.1 supports Hyper-V running on Windows Server 2008 SP2, Hyper-V R2 running on Windows Server 2008 R2, and Virtual Server 2005 R2 SP1. However, Virtual Server 2005 R2 SP1 cannot serve as a host for virtual machines exported from Hyper-V or Hyper-V R2 since the export format is incompatible.

In addition, OVMST 2.1 can orchestrate offline virtual machine updates for the following Windows guest operating systems:

  • Windows XP Professional SP2 (64-bit)
  • Windows XP Professional SP3 (32-bit)
  • Windows Server 2003 SP2 (32 and 64-bit)
  • Windows Server 2003 R2 SP2 (32 and 64-bit)
  • Windows Vista SP1 and SP2 (32 and 64-bit)
  • Windows Server 2008 SP2 (32 and 64-bit)
  • Windows Server 2008 R2 (64-bit)
  • Windows 7 (32 and 64-bit)

If Windows 7 or Windows Server 2008 R2 offline virtual machines need to be updated using OVMST 2.1 and WSUS, or in conjunction with System Center Config Mgr 2007 SP2, then WSUS 3.0 SP2 is a requirement. WSUS 3.0 SP2 is also available as a free download from the Microsoft website.

OVMST 2.1 Components

OVMST 2.1 is composed of a management console, a workflow engine, and a collection of scripts used by the workflow engine to perform the various tasks that are required during an update cycle. In order to execute processes on remote client virtual machines (offline virtual machines temporarily deployed on a Hyper-V host to perform updates), OVMST 2.1 relies on the use of the PsExec utility developed by Mark Russinovich, formerly from Winternals, and currently a Technical Fellow in the Platform and Services Division at Microsoft. The PsExec utility must be downloaded separately and installed on the same machine as the OVMST 2.1 application.

The OVMST 2.1 management console seen in Figure 1 is an MMC-based application that allows configuration of the tool, creation of virtual machine groups, assignment of virtual machines to virtual machine groups, as well as creation and scheduling of update servicing jobs.


Figure 1: OVMST 2.1 Management Console

OVMST 2.1 uses servicing jobs to manage the update operations. A servicing job combines configuration settings with Windows batch files, VB scripts, and Windows PowerShell cmdlets that make up a task managed by the Windows Task Scheduler. Specifically, a servicing job defines the following configuration settings:

  • Software update management system (System Center Config Mgr or WSUS)
  • Target offline virtual machines
  • Virtual network to connect virtual machines for updates
  • Hyper-V maintenance hosts to deploy the virtual machines
  • Account credentials with administrative permissions on the virtual machines
  • Execution schedule

A servicing job can target one or more offline virtual machines organized in virtual machine groups created within OVMST 2.1. A virtual machine group allows you to assign specific virtual machines to a collection that is then easily selected as the target of a specific servicing job.

Offline Virtual Machine Update Workflow

The main tasks that are performed during an OVMST 2.1 servicing job include the following steps:

  • Deploying a virtual machine from a VMM library to a Virtual Server or Hyper-V server identified as a maintenance host in System Center VMM
  • Configuring the virtual network settings
  • Powering on the virtual machine
  • Triggering the software update cycle using System Center Config Mgr or WSUS.
  • Monitoring the installation of updates and virtual machine reboots
  • Powering off the updated virtual machine
  • Exporting the virtual machine
  • Storing the virtual machine files back in the VMM library

Figure 2 represents a more detailed schematic of the servicing job workflow when using WSUS to perform offline virtual machine updates.


Figure 2: OVMST 2.1 Workflow with WSUS Integration

If you are interested in reviewing the actual scripts used to perform the various tasks described in Figure 2, you can find them in %SystemDrive%\Program Files\Microsoft Offline Virtual Machine Servicing Tool\Script after installation of the application on your VMM server.

As you can infer from this diagram, the update process is I/O intensive, requiring the transfer of potentially large virtual machine VHDs between the System Center VMM server and the maintenance hosts (Hyper-V or Virtual Server 2005 R2 SP1 servers). Therefore, in an environment with a large repository of offline virtual machines to update, best performance can be achieved using a storage area network (SAN) infrastructure, preferably with Fibre Channel connections.

Another important consideration is the networking configuration to use so that you can ensure isolation of the virtual machine clients during the update process. Even with the infrastructure components deployed in a production corporate network environment, you can configure and use a VLAN to secure the network traffic between the System Center VMM server, WSUS server, maintenance hosts, and the target virtual machines. Additionally, you must ensure that other services required during the update can also communicate across the VLAN (e.g., AD Domain Services).

Conclusion

In Part I of this article, you were introduced to the Microsoft Offline Virtual Machine Servicing Tool, Version 2.1 and how it can help you to resolve the problem of updating offline virtual machines stored in a VMM library. In Part II of the article, you will learn about OVMST 2.1 installation requirements, as well as obtain step-by-step procedures to install and configure OVMST 2.1, and configure and store target VMs as offline virtual machines in a VMM library. You will also learn how to create and monitor an OVMST 2.1 servicing job.

Posted in TUTORIALS, Windows 2008 | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »