Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘storage solution’

Best practices to build a Hyper-V availability cluster

Posted by Alin D on June 21, 2011

As you might expect, the first thing IT managers notice about virtualization is the way it lowers costs. Systems administrators, however, are usually more interested in how it can save on downtime.

If you are considering Microsoft Hyper-V for your production environment, you’ll want to know how to take advantage of its high availability option. To that end, let’s take a look at some of the best practices for high availability with Hyper-V.

1. Build your cluster servers identically
Bringing high availability to Hyper-V means setting up a Windows Failover Cluster. This configuration requires at least two servers, and up to 16 nodes can participate in a single cluster. This is not the time to brush off a couple of old servers; you’ll want to take advantage of processor technology that brings forth fast virtual machine performance and components that allow you to know when a server is failing.

You also want to ensure these servers are built identically. From the processors, network adapters and memory down to the driver revisions and patch levels, your hosts need to run the same configuration for predictable behavior. This will also help you pass the tests that the cluster validation wizard runs against your systems when you set up high availability.
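Before running the validation wizard, it can help to diff your hosts' inventories yourself. Here is a minimal Python sketch of that idea; the host names, inventory fields, and version strings are hypothetical examples, and in practice you would collect the real values with something like systeminfo or WMI queries.

```python
# Minimal sketch: flag configuration drift between prospective cluster nodes.
# Inventory fields and values below are made-up examples.

def find_drift(hosts):
    """Return {field: {host: value}} for every field whose value differs."""
    fields = set().union(*(h.keys() for h in hosts.values()))
    drift = {}
    for field in fields:
        values = {name: inv.get(field) for name, inv in hosts.items()}
        if len(set(values.values())) > 1:
            drift[field] = values
    return drift

hosts = {
    "HV-NODE1": {"cpu": "Xeon X5650", "nic_driver": "11.0.5", "patch_level": "SP1"},
    "HV-NODE2": {"cpu": "Xeon X5650", "nic_driver": "11.0.3", "patch_level": "SP1"},
}

for field, values in find_drift(hosts).items():
    print(f"Mismatch in {field}: {values}")
```

In this example the NIC driver versions differ, which is exactly the kind of subtle mismatch that causes unpredictable failover behavior later.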

2. Understand the N+1 strategy
When setting up your cluster, think about how you plan on handling a potential failure. One option is to set up your cluster as a pure failover cluster and keep half of your capacity at the ready to take over if a server fails. This is the classic setup, especially if you have two servers and availability is your main goal.

You could also set up an active cluster, which shares the load between multiple machines. This configuration is fully supported with Windows Server 2008 R2, but it requires that each host has enough resources to handle its normal load plus the load of another host whose machines fail over to it.

Remember how important memory is. You may plan to run your virtual servers in a balanced scenario by spreading your VMs out amongst your hosts, but it only takes a hiccup of one component to cause all of your VMs to fail over. You’ll want enough memory and processor power to handle that load.
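A quick back-of-the-envelope check makes the point. The sketch below verifies that for any single host failure, the surviving hosts have enough free memory to absorb the failed host's VMs; the host sizes, VM footprints, and the 4 GB parent-partition reserve are all illustrative assumptions, not measured values.

```python
# Rough N+1 capacity check: can the surviving hosts absorb a failed host's VMs?
# All numbers are illustrative; substitute real host memory and VM footprints.

def survives_single_failure(host_ram_gb, vm_ram_gb_per_host, reserve_gb=4):
    """For each host that could fail, verify the remaining hosts can take its VMs.
    reserve_gb approximates memory held back for the parent partition."""
    hosts = list(vm_ram_gb_per_host)
    for failed in hosts:
        moved = sum(vm_ram_gb_per_host[failed])
        spare = sum(
            host_ram_gb - reserve_gb - sum(vm_ram_gb_per_host[h])
            for h in hosts if h != failed
        )
        if moved > spare:
            return False
    return True

# Two 48 GB hosts, each running 3 x 8 GB VMs: each host has only ~20 GB spare,
# which is not enough to absorb the other host's 24 GB of VMs.
print(survives_single_failure(48, {"HV-NODE1": [8, 8, 8], "HV-NODE2": [8, 8, 8]}))
```

Note that this "balanced" layout still fails the check, which is exactly the hiccup scenario described above: spreading VMs evenly is not the same as having N+1 headroom.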

3. Have the right storage
Finding the right kind of storage is tricky. Many admins begin by deploying Hyper-V on single hosts with local storage, but a cluster requires you to run your virtual machines on shared storage. If you already have a storage area network (SAN), only use it if it's compatible with Windows Server 2008 R2 failover clustering. Not every SAN solution, including its HBAs and firmware revisions, is compatible with the latest version of Windows Server. You can of course run the Validate a Configuration Wizard, but by then you may have devoted quite a bit of time to standing up an incompatible configuration.

If you need to use an iSCSI solution, it's important to have enough I/O bandwidth to handle your virtual machines. You must have at least Gigabit speed on a dedicated network, and you should utilize jumbo frames. If you are deploying I/O-intensive apps like Microsoft SQL Server, you'll want to do some testing to verify that your storage solution can handle that load alongside your other virtual machines. You should also run dedicated NICs on dedicated switches for storage only, and never share your iSCSI bandwidth with regular server traffic.
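To get a feel for how little headroom a Gigabit link actually offers, consider a simple budget calculation. This Python sketch assumes roughly 110 MB/s of usable sustained throughput on a dedicated GbE iSCSI network (an assumed figure; the theoretical maximum is about 125 MB/s) and hypothetical per-VM throughput numbers that you would replace with measurements.

```python
# Back-of-the-envelope iSCSI bandwidth check.
# LINK_MBPS and the per-VM figures are assumptions for illustration only.

LINK_MBPS = 110  # assumed usable MB/s on a dedicated GbE iSCSI network

def fits_on_link(vm_throughput_mbps, headroom=0.8):
    """Keep aggregate VM storage traffic under a fraction of link capacity."""
    demand = sum(vm_throughput_mbps)
    return demand <= LINK_MBPS * headroom

print(fits_on_link([15, 20, 10, 25]))  # 70 MB/s demand vs an 88 MB/s budget
```

A single busy SQL Server VM can consume most of that budget on its own, which is why testing under realistic load matters before you commit to the design.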

You can deploy your cluster storage using the Cluster Shared Volumes (CSV) feature introduced with Windows Server 2008 R2. CSV removes the old requirement that a single node own the storage: any node can access storage presented to the cluster, you can store more than one VM on a single LUN, and those machines can fail over independently, which was not the case with the first version of Hyper-V.

This doesn’t mean you should put scores of virtual machine files on a single LUN, however. When you place your virtual machines on storage, you should still consider I/O performance of a single LUN and know when it’s time to move machines to different storage. It’s also important that your storage is not presented to any node that is not part of the cluster. Finally, stay away from dynamic disks, as only basic disks are supported.

4. Use the right management tools
You should consider utilizing real Hyper-V management tools, even in a small cluster. For example, Microsoft System Center Virtual Machine Manager (SCVMM) Workgroup Edition will manage up to five nodes in a cluster at a discount compared to the Standard edition. This gives you health information and monitoring, physical-to-virtual machine conversions, and an easy way to implement Live Migration. After all, why implement the cluster if you can't easily take advantage of its features?

These are just a few tips to get you started on the road to hosting your Hyper-V virtual machines with the latest Windows clustering technology. Clustering is highly recommended when deploying Microsoft Hyper-V in production, and while its main benefit is availability, it will also help you sleep at night.

Posted in Windows 2008

Common Storage Configurations

Posted by Alin D on September 20, 2010

Introduction

In today’s world everything is on computers. More specifically, everything is stored on storage devices which are attached to computers in a number of configurations. There are many ways in which these devices can be accessed by users. Some are better than others and some are best for certain situations; in this article I will give an overview of some of these ways and describe some situations where one might want to implement them.

Firstly there is an architecture called Directly Attached Storage (DAS). This is what most people would think of when they think of storage devices. This type of architecture includes things like internal hard drives, external hard drives, and USB keys. Basically DAS refers to anything that attaches directly to a computer (or a server) without any network component (like a network switch) between them.


Figure 1: Three configurations for Direct Attached Storage solutions (Courtesy of ZDNetasia.com)

A DAS device can even accommodate multiple users concurrently accessing data. All that is required is that the device have multiple connection ports and the ability to support concurrent users. DAS configurations can also be used in large networks when they are attached to a server which allows multiple users to access the DAS devices. The only thing that DAS excludes is the presence of a network device between the storage device and the computer.

Many home users and small businesses are well served by Network Attached Storage (NAS). NAS devices offer the convenience of centrally locating your storage, not necessarily in the same place as your computers. This is convenient for home users who may want to keep their storage devices in the basement while roaming about the house with a laptop. It is equally appealing to small businesses where it may not be appropriate to have large storage devices in areas where clients or customers are present. DAS configurations could also provide this, though not as easily or elegantly for smaller implementations.


Figure 2: Diagram of a Network Attached Storage system (Courtesy of windowsnas.com)

A NAS device is basically a stripped down computer. Though they don't have monitors or keyboards, they do have stripped down operating systems which you can configure, usually by connecting to the device via a web browser from a networked computer. NAS operating systems are typically stripped down versions of UNIX operating systems, such as the open source FreeNAS, which is based on FreeBSD. FreeNAS supports many protocols such as CIFS, FTP, NFS, TFTP, AFP, RSYNC, and iSCSI. Since FreeNAS is open source you're also free to add your own implementation of any protocol you wish. In a future article I will provide more in-depth information on these protocols; so stay tuned.

Because NAS devices handle the file system functions themselves, they do not need a server to handle these functions for them. Networks that employ DAS devices attached to a server will require the server to handle the file system functions. This is another advantage of NAS over DAS. NAS “frees up” the server to do other important processing tasks because a NAS device is connected directly to the network and handles all of the file serving itself. This also means that a NAS device can be simpler to configure and maintain for smaller implementations because they won’t require a dedicated server.

NAS systems commonly employ RAID configurations to offer users a robust storage solution. In this respect NAS devices can be used in a similar manner as DAS devices (for robust data backup). The biggest, and most important, difference between NAS systems and DAS systems is that NAS systems contain at least one networking device between the end users and the NAS device(s).
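The trade-off between RAID levels is mostly a capacity-versus-redundancy calculation. As a quick Python sketch, here are the standard usable-capacity formulas for the levels commonly offered by NAS boxes; the four-disk, 2 TB example is just an illustration.

```python
# Quick usable-capacity comparison for common RAID levels.
# disk_count and disk_tb are example values; the formulas are the standard ones.

def usable_tb(level, disk_count, disk_tb):
    if level == "RAID0":
        return disk_count * disk_tb        # striping, no redundancy
    if level == "RAID1":
        return disk_tb                     # full mirror of one disk
    if level == "RAID5":
        return (disk_count - 1) * disk_tb  # one disk's worth of parity
    if level == "RAID6":
        return (disk_count - 2) * disk_tb  # two disks' worth of parity
    if level == "RAID10":
        return disk_count // 2 * disk_tb   # mirrored stripe set
    raise ValueError(level)

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, usable_tb(level, 4, 2), "TB")
```

With four 2 TB disks, RAID 5 keeps 6 TB usable while surviving one disk failure, whereas RAID 6 and RAID 10 both drop to 4 TB in exchange for tolerating more failure scenarios.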

NAS solutions are similar to another storage configuration called Storage Area Networks (SAN). The biggest difference between a NAS system and a SAN system is that a NAS device handles the file system functions of an operating system while a SAN system provides only block-based storage services and leaves the file system functions to be performed by the client computer.
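The file-versus-block distinction is easier to see in code. The toy Python classes below simulate it entirely in memory: the "SAN" answers requests for numbered blocks and knows nothing about files, while the "NAS" maps file names to contents itself. Nothing here reflects a real protocol; it only illustrates where the file system logic lives.

```python
# Toy illustration of file-level (NAS) vs block-level (SAN) storage.
# Everything is simulated in memory purely for the sake of the example.

BLOCK_SIZE = 512

class ToySan:
    """Block storage: knows nothing about files, only numbered blocks."""
    def __init__(self, blocks):
        self.disk = bytearray(blocks * BLOCK_SIZE)
    def write_block(self, n, data):
        self.disk[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\0")
    def read_block(self, n):
        return bytes(self.disk[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

class ToyNas:
    """File storage: the device itself maps names to contents."""
    def __init__(self):
        self.files = {}
    def write_file(self, name, data):
        self.files[name] = data
    def read_file(self, name):
        return self.files[name]

san = ToySan(blocks=8)
san.write_block(3, b"hello")             # the CLIENT's file system chose block 3
nas = ToyNas()
nas.write_file("greeting.txt", b"hello")  # the NAS handles the name itself

print(san.read_block(3).rstrip(b"\0"))
print(nas.read_file("greeting.txt"))
```

With the SAN, deciding that "greeting.txt" lives at block 3 is the client's job; with the NAS, that bookkeeping happens on the device, which is exactly why a NAS offloads file-serving work from your servers.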

Of course, that’s not to say that NAS can’t be employed in conjunction with SAN. In fact, large networks often employ SAN with NAS and DAS to meet the diverse needs of their network users.

One advantage that SAN systems have over NAS systems is that NAS systems are not as readily scalable. SAN systems can quite easily add servers in a cluster to handle more users. NAS systems employed in networks where the networks are growing rapidly are often incapable of handling the increase in traffic, even if they can handle the storage capacity.

This doesn’t mean that NAS systems cannot scale at all. You can, in fact, cluster NAS devices in a similar manner to how one would cluster servers in a SAN system, and doing so still allows full file access from any node in the NAS cluster. But just because something can be done doesn’t mean it should be done; if you’re thinking of going down this path, tread carefully – I would recommend implementing a SAN solution instead.


Figure 3: Diagram of a Storage Area Network (Courtesy of anildesai.net)

However, NAS systems are typically less expensive than SAN systems and in recent years NAS manufacturers have concentrated on expanding their presence on home networks where many users have high storage demands for multimedia files. For most home users a less expensive NAS system which doesn’t require a server and rack space is a much more attractive solution when compared with implementing a SAN configuration.

SAN systems have many advantages over NAS systems. For instance, it is quite easy to replace a faulty server in a SAN system, whereas it is much more difficult to replace a NAS device which may or may not be clustered with other NAS devices. It is also much easier to geographically distribute storage arrays within a SAN system. This type of geographic distribution is often desirable for networks wanting a disaster tolerant solution.

Above all, SAN systems offer simplified management, scalability, flexibility, and improved data access and backup. For these reasons SAN configurations are becoming quite common in large enterprises that take their data storage seriously.

Apart from large networks, SAN configurations are not very common. One exception is the video editing industry, which requires a high capacity storage environment along with high bandwidth for data access. A SAN configuration using Fibre Channel is really the best solution for video editing networks and networks in similar industries.
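A rough bandwidth calculation shows why. The Python sketch below estimates the data rate of a single uncompressed video stream; the 3 bytes/pixel figure assumes simple 8-bit RGB, and the usable link speeds are rounded assumptions, so treat all the numbers as illustrative rather than definitive.

```python
# Why video editing pushes toward Fibre Channel: one uncompressed HD stream
# can exceed Gigabit Ethernet. Figures are illustrative assumptions.

def stream_mb_per_s(width, height, fps, bytes_per_pixel=3):
    return width * height * fps * bytes_per_pixel / 1e6

hd = stream_mb_per_s(1920, 1080, 30)
print(f"Uncompressed 1080p30: {hd:.0f} MB/s")
print("Fits on GbE (~110 MB/s usable)?", hd <= 110)
print("Fits on 4Gb FC (~400 MB/s)?", hd <= 400)
```

A single uncompressed 1080p stream at 30 fps already needs roughly 187 MB/s, more than a Gigabit link can sustain, while a Fibre Channel fabric handles it with room for additional streams.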

While any of these three configurations (DAS, NAS, and SAN) can address the needs of most networks, putting a little thought into the network design can save a lot of future effort as the network grows or the need arises to upgrade various aspects of it. Choosing the right configuration is important: pick one that meets your network's current needs and any predictable needs of the near to medium term.

Posted in TUTORIALS