Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘storage devices’

Changes and improvements on Windows Server 8

Posted by Alin D on September 28, 2011

Windows Server 8 features a laundry list of new technologies, significant enhancements and much more. One of the most notable changes is how Microsoft is looking to embrace the cloud; Server 8 offers enhancements that are certain to make federation of cloud services much easier and also adds significant support for private and hybrid cloud technologies.

From a technical perspective, many of those enhancements tie directly into Microsoft’s virtualization platform, Hyper-V, which has become the basis for much of the technology foundation for Windows Server 8. Hyper-V has been re-engineered to offer more automation, easier provisioning and better isolation. What’s more, Microsoft’s new management paradigm brings simplicity to virtualization that was often absent from previous versions of Windows Server and Hyper-V.

Simplicity and ease of management are a key theme, demonstrated by how easy it is to move instances of virtualized operating systems across hosts, create resiliency policies and provision IP addresses. Microsoft has embraced an ideology of “it just works” with Windows Server 8; one path the company has taken toward that goal is to divorce the GUI management console from the server.

In other words, Windows Server 8 is designed to be managed from an administrator’s endpoint, and not on the server itself. That management methodology offers several advantages. For example, Microsoft was able to demonstrate the ability to concurrently manage multiple servers from the new server manager console. Since the console runs on an endpoint, it can be attached to several servers at once, eliminating the monolithic style of management used in the past.

What’s more, Microsoft is fully embracing the command-line interface (CLI) with PowerShell, which sidesteps the management GUI altogether. PowerShell allows administrators to forgo the GUI and execute commands and scripts directly from a command prompt. In fact, PowerShell 3.0 has exploded from 300 cmdlets to more than 2,300 and is one of the core management engines of the OS. Most commands use plain-English verb-noun syntax and are backed by context-sensitive help that explains each command’s function and how to use it. Simply put, tasks that were once complicated to pull off in a GUI can now be accomplished in one or two simple steps.
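
For example, even before you touch the new storage features, the discoverability story is easy to demonstrate from any PowerShell prompt. The following is a minimal sketch using only the standard Get-Command and Get-Help cmdlets (the exact cmdlet count, and the Get-Disk cmdlet shown here, will vary with the build you are running):

    # See how many cmdlets this build exposes
    (Get-Command -CommandType Cmdlet).Count

    # Discover storage-related cmdlets through their plain-English verb-noun names
    Get-Command -Noun *Disk*

    # Context-sensitive help for a cmdlet, complete with worked examples
    Get-Help Get-Disk -Examples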

The same ideology of simplicity that has been applied to virtualization and management is also a significant theme in how Microsoft has tackled storage. Here, storage has been reinvented to incorporate virtualization, as well as improved abstraction from the underlying hardware. In layman’s terms, that means most forms of storage, be they local disk, NAS, JBOD and so on, can be treated the same from the standpoint of the server. Microsoft has made that possible by creating two new paradigms for storage on Windows Server 8: Storage Pools and Storage Spaces.

Storage Pools and Storage Spaces offer ways to easily manage a huge array of storage devices, which often come in varying types and sizes. The secret sauce here consists of storage virtualization teamed with hardware abstraction and storage aggregation. Simply put, Storage Pools are units of storage aggregation that provide administration and isolation, while Storage Spaces are virtual disks carved out of a pool that add performance and resiliency and simplify storage provisioning.

In practice, the technology lets you aggregate individual storage devices into a single unit of storage, then provision and divvy up that capacity as needed. The obvious use for the technology is virtual machines, which need flexible, elastic storage to meet demand. What’s more, the technology simplifies management of disparate storage types while providing the ability to scale from the SMB up to the large enterprise.
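
As a rough sketch of what that provisioning might look like from PowerShell (the cmdlet and parameter names below reflect the Storage Spaces tooling shown in the developer preview and may change before release, and the pool and disk names are made up), pooling a set of physical disks and carving a resilient space out of them takes only a few commands:

    # Find the physical disks that are eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true

    # Aggregate them into a single storage pool on the default storage subsystem
    $subsystem = Get-StorageSubSystem | Select-Object -First 1
    New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

    # Carve a mirrored storage space (a virtual disk) out of the pool for VM storage
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "VMStore" -ResiliencySettingName Mirror -Size 500GB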

Speed, space, utilization and efficiency are the primary elements Microsoft stresses for its new take on server storage. One technical example that touches all of those points is the inclusion of de-duplication technology. Microsoft’s Data Deduplication is designed to deal with the demand for physical storage, which seems to be increasing exponentially in the enterprise.

Microsoft’s stab at de-duplication works to reduce file storage sizes by removing duplicate data from the physical hard disk and then abstracting the requests to that data. Microsoft uses a straightforward approach to de-dupe files; take for example an environment where dozens of VHD (virtual hard disk) files are stored. Many of the files on those VHDs are identical copies of each other, such as .dlls, .exes and so on. Data de-duplication removes all the redundant copies of those files from all of the VHDs, save one. The redundant data is placed into a separate store in System Volume Information (SVI), and then a marker is created which points to the file that serves as the template. When used across thousands of files in a storage network, vast reductions in storage space should be expected.
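
Assuming the feature ships with the PowerShell coverage demonstrated in the preview (the cmdlet names below are based on that preview, and the drive letter is just an example), enabling de-duplication on a volume full of VHD libraries would look something like this:

    # Requires the Data Deduplication feature to be installed on the server
    # Enable de-duplication on the volume that holds the VHD library
    Enable-DedupVolume -Volume "E:"

    # Start an optimization pass now rather than waiting for the background schedule
    Start-DedupJob -Volume "E:" -Type Optimization

    # See how much space the redundant copies were actually costing
    Get-DedupStatus -Volume "E:" | Format-List SavedSpace, OptimizedFilesCount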

Other improvements to the storage subsystem include enhancements to Cluster Shared Volumes (CSV) and their expansion beyond Hyper-V, BitLocker support for shared cluster disks, cluster-aware updating, SMB 2.2 storage support, and continuously available Hyper-V storage on remote SMB 2.2 shares.

All told, Microsoft has evolved Windows Server into a network operating system that embraces the cloud and reduces the need for third-party solutions for virtualization, dedupe, storage management and so on. Only time will tell whether Windows Server 8 will have the impact on the market that Microsoft is anticipating, and whether the technologies demonstrated during the pre-beta stage will actually make it into the shipping product, which may be a year away.

Posted in TUTORIALS

Six steps to configure SQL Server on a SAN

Posted by Alin D on June 8, 2011

Storage area networks (SANs) make it easy to connect massive amounts of expandable storage to a server. SANs are particularly useful for SQL Server installations: enterprise databases don’t just require a great deal of storage; they also have continually expanding storage needs. That said, you need to take some care when using SANs in clustered SQL Server environments. In this tip, I’ll give you some suggestions to keep in mind when setting up a SQL Server cluster on a SAN.

1. Get manufacturer-specific guidelines for tuning

SANs are not all built the same. Know your SAN before you hook it up and start populating it with data. For instance, you must understand how to prepare its disks, and what recommendations the manufacturer offers, so they will work well in a clustered Windows Server environment. Check whether the SAN you’re using has actually been tested in a clustered environment.

For instance, you will likely have to use the DISKPART.EXE utility (included in Windows Server 2003 Service Pack 1) to fine-tune disk-track alignment, usually referred to as the “LUN offset” on SANs. Hewlett-Packard Co. is one company that ships detailed documentation with its storage devices explaining how to perform this kind of tuning for Windows Server 2003.
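
A typical alignment session looks something like the sketch below, run from an elevated PowerShell prompt. The disk number, drive letter and 1024 KB offset are purely illustrative; substitute whatever your SAN vendor documents for its LUNs:

    # Illustrative DISKPART script; replace the disk number, alignment value and
    # drive letter with the values your storage vendor recommends
    $script = "select disk 2",
              "create partition primary align=1024",
              "assign letter=S"

    # Save the script and run DISKPART non-interactively against it
    $path = Join-Path $env:TEMP "align_lun.txt"
    $script | Set-Content -Path $path
    diskpart.exe /s $path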

2. Use RAID-10 whenever possible

This isn’t a cluster-specific piece of advice, but it’s important nonetheless. If data integrity matters more to you than cost, use RAID-10 for your SAN; it is widely considered one of the best storage arrangements for databases, although it comes at a higher price.

For those not familiar with it, RAID-10 is “nested RAID,” or a RAID-0 array made from a set of RAID-1 arrays. It’s also been described as a stripe of mirrors. This is an extremely robust and efficient setup; RAID-10 is not just highly fault-tolerant, but it supports fast writing, too, which is critical in a database.

When you set up a RAID-10 system, put data and log files on different sets of mirrored spindles to enhance both your speed and your recovery options. The more physical spindles you can spread your data over, and the more redundancy and parallelism you can get, the better.
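
As a sketch of what that looks like in practice (the drive letters, paths and instance name are placeholders, and Invoke-Sqlcmd assumes the SQL Server PowerShell snap-in is loaded), you simply point the data and log files at different drives when creating the database:

    # D: and L: stand in for two different sets of mirrored spindles
    $createDb = "
        CREATE DATABASE Sales
        ON PRIMARY ( NAME = Sales_data, FILENAME = 'D:\SQLData\Sales.mdf' )
        LOG ON     ( NAME = Sales_log,  FILENAME = 'L:\SQLLogs\Sales.ldf' );
    "
    Invoke-Sqlcmd -ServerInstance "SQLCLUSTER\INST1" -Query $createDb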

RAID-5 is also commonly recommended for databases, but it is best suited to read-only volumes. RAID-10 is the better choice in any scenario where more than 10% of disk activity is writes, which describes the vast majority of databases out there. For very large databases that grow into the terabytes, you could even consider RAID-100, which adds yet another level of nesting and striping (also called “plaid RAID”).

3. Active/active and active/passive considerations

An active/active (a/a) cluster should get a different disk arrangement than an active/passive (a/p) cluster. An a/a cluster has two nodes, or servers, which are both active at the same time, balancing the load between them and mirroring each other’s updates; if one server goes offline, the other can pick up the slack as needed. An a/p arrangement has one server running continuously while the other sits idle, and only if the main server fails does the backup server kick in.

With a/a clusters, each database server should get its own set of mirrored disk spindles; the two should not share the same logical drive for their databases. This is obviously more expensive, but if you want the best possible uptime, then the cost involved in adding the needed disks will be well worth it. Some database administrators go so far as to provide a dedicated SAN to each node of the cluster. However, if the amount of data replicating between nodes outweighs the amount of data going to and from clients, it might make more sense to keep the data for an a/a setup on the same SAN (albeit on different physical disks).

With a/p, you can easily have the database(s) sharing disks or SAN units. Since only one database server is active at any given time, there’s no contention going on.

4. Keep drive lettering consistent across clusters

This is one of the most cluster-specific pieces of advice to keep in mind. All host nodes in a cluster must see the same drives with the same drive letters, so plan your drive lettering cluster-wide. The clustering software controls which node has access to a specific device, so you don’t need to worry about that; but each node must have a consistent view of the storage to be used.
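
A quick way to sanity-check this from your administration workstation (the node names below are placeholders) is to list the drive letters each node can currently see, for example with WMI:

    # Compare the drive letters visible on each cluster node
    $nodes = "SQLNODE1", "SQLNODE2"

    foreach ($node in $nodes) {
        Get-WmiObject -Class Win32_LogicalDisk -ComputerName $node |
            Select-Object @{Name='Node';Expression={$node}}, DeviceID, VolumeName
    }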

5. Don’t try to move temporary databases around

The temporary databases used by SQL Server are part of the failover process and need to be available in a shared context. Don’t try to move them around. You may think you’re getting SAN bandwidth back by hosting temporary databases locally, but it’s not worth doing at the expense of basic functionality.
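
It is still worth confirming where tempdb’s files actually live, especially after a failover test. A minimal check (the instance name is a placeholder, and again this assumes Invoke-Sqlcmd is available) looks like this:

    # tempdb's files should sit on the clustered (shared) storage, not on a local disk
    Invoke-Sqlcmd -ServerInstance "SQLCLUSTER\INST1" -Query "
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID('tempdb');
    "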

6. Do backups through mapped drives only

If you’re using a SAN to store SQL Server backups, those backups should be run through a mapped drive letter and not through a UNC name. Failover SQL Server clusters can only work through storage devices registered with the Cluster service in Cluster Administrator. This becomes doubly important if you have a failure and need access to SQL Server backups through a device shared on the cluster. Also remember to keep the advice in tip #4 in mind when mapping out drives for your backups.
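
A minimal sketch of the right approach (the S: drive, path and instance name are placeholders, with S: assumed to be a clustered SAN disk):

    # Back up through the clustered drive letter, not through a \\server\share UNC path
    Invoke-Sqlcmd -ServerInstance "SQLCLUSTER\INST1" -Query "
        BACKUP DATABASE Sales
        TO DISK = 'S:\SQLBackups\Sales_full.bak'
        WITH INIT, CHECKSUM;
    "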

 

Posted in SQL

Common Storage Configurations

Posted by Alin D on September 20, 2010

Introduction

In today’s world everything is on computers. More specifically, everything is stored on storage devices, which are attached to computers in a number of configurations, and there are many ways in which those devices can be accessed by users. Some are better than others, and some are best suited to particular situations; in this article I will give an overview of the common configurations and describe the situations where you might want to implement each one.

First, there is an architecture called Direct Attached Storage (DAS). This is what most people think of when they think of storage devices: internal hard drives, external hard drives, USB keys and the like. Basically, DAS refers to anything that attaches directly to a computer (or a server) without any network component (such as a network switch) between them.


Figure 1: Three configurations for Direct Attached Storage solutions (Courtesy of ZDNetasia.com)

A DAS device can even accommodate multiple users concurrently accessing data. All that is required is that the device have multiple connection ports and the ability to support concurrent users. DAS configurations can also be used in large networks when they are attached to a server which allows multiple users to access the DAS devices. The only thing that DAS excludes is the presence of a network device between the storage device and the computer.

Many home users and small businesses are well served by Network Attached Storage (NAS). NAS devices offer the convenience of centralizing your storage without it having to sit next to your computers. This is convenient for home users who may want to keep their storage devices in the basement while roaming about the house with a laptop, and it is equally appealing to small businesses where it may not be appropriate to have large storage devices sitting where clients or customers are present. DAS configurations could also provide this, though not as easily or elegantly for smaller implementations.


Figure 2: Diagram of a Network Attached Storage system (Courtesy of windowsnas.com)

A NAS device is basically a stripped-down computer. Though it has no monitor or keyboard, it does have a stripped-down operating system which you can configure, usually by connecting to the device with a web browser from a networked computer. NAS operating systems are typically slimmed-down versions of UNIX-like operating systems, such as the open source FreeNAS, which is based on FreeBSD. FreeNAS supports many protocols, such as CIFS, FTP, NFS, TFTP, AFP, RSYNC and iSCSI, and since FreeNAS is open source you’re also free to add your own implementation of any protocol you wish. In a future article I will provide more in-depth information on these protocols, so stay tuned.

Because NAS devices handle the file system functions themselves, they do not need a server to handle those functions for them. Networks that employ DAS devices attached to a server require the server to handle the file system work; this is another advantage of NAS over DAS. NAS “frees up” the server for other important processing tasks because a NAS device is connected directly to the network and handles all of the file serving itself. It also means a NAS device can be simpler to configure and maintain for smaller implementations, because it doesn’t require a dedicated server.
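
From a Windows client’s point of view, consuming a CIFS/SMB share exported by a NAS is no different from using any other file share. A small sketch (the NAS name and share are made up):

    # Map a share exported by the NAS for the current session;
    # the NAS box itself does all of the file-system work
    New-PSDrive -Name N -PSProvider FileSystem -Root "\\freenas01\media"

    # The client simply reads and writes files over the network
    Get-ChildItem N:\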

NAS systems commonly employ RAID configurations to offer users a robust storage solution. In this respect NAS devices can be used in much the same way as DAS devices (for robust data backup). The biggest, and most important, difference between NAS systems and DAS systems is that NAS systems contain at least one networking device between the end users and the NAS device(s).

NAS solutions are similar to another storage configuration called Storage Area Networks (SAN). The biggest difference between a NAS system and a SAN system is that a NAS device handles the file system functions of an operating system while a SAN system provides only block-based storage services and leaves the file system functions to be performed by the client computer.
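
That difference shows up in how a client connects. With an iSCSI SAN, for example, the client logs in to a target and receives a raw block device that it must then partition and format itself. A rough sketch using Windows’ built-in iscsicli tool (the portal address and target IQN are made up):

    # Register the SAN's iSCSI portal, list its targets and log in to one
    iscsicli QAddTargetPortal 192.168.10.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2010-09.com.example:storage.lun0

    # The LUN now appears as a raw local disk; unlike a NAS share, the client is
    # responsible for partitioning and formatting it (Disk Management or DISKPART)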

Of course, that’s not to say that NAS can’t be employed in conjunction with SAN. In fact, large networks often employ SAN with NAS and DAS to meet the diverse needs of their network users.

One advantage that SAN systems have over NAS systems is that they are more readily scalable: a SAN can quite easily add servers to a cluster to handle more users. NAS systems deployed on rapidly growing networks are often incapable of handling the increase in traffic, even if they can handle the storage capacity.

That doesn’t mean NAS systems can’t scale at all. You can, in fact, cluster NAS devices in a similar manner to how you would cluster servers in a SAN system, and doing so still allows full file access from any node in the NAS cluster. But just because something can be done doesn’t mean it should be done; if you’re thinking of going down this path, tread carefully – I would recommend implementing a SAN solution instead.


Figure 3: Diagram of a Storage Area Network (Courtesy of anildesai.net)

However, NAS systems are typically less expensive than SAN systems, and in recent years NAS manufacturers have concentrated on expanding their presence on home networks, where many users have high storage demands for multimedia files. For most home users, a less expensive NAS system that doesn’t require a server and rack space is a much more attractive option than implementing a SAN configuration.

SAN systems have many advantages over NAS systems. For instance, it is quite easy to replace a faulty server in a SAN system, whereas it is much more difficult to replace a NAS device, which may or may not be clustered with other NAS devices. It is also much easier to geographically distribute storage arrays within a SAN system; this type of geographic distribution is often desirable for networks that want a disaster-tolerant solution.

The biggest advantages of SAN systems are simplified management, scalability, flexibility, and improved data access and backup. For these reasons SAN configurations are becoming quite common in large enterprises that take their data storage seriously.

Apart from large networks, SAN configurations are not very common. One exception is the video-editing industry, which requires a high-capacity storage environment along with high bandwidth for data access. A SAN configuration using Fibre Channel is really the best solution for video-editing networks and for networks in similar industries.

While any of these three configurations (DAS, NAS and SAN) can address the needs of most networks, putting a little thought into the network design can save a lot of future effort as the network grows or the need arises to upgrade various aspects of it. Choosing the right configuration is important: you need one that meets your network’s current needs and any predictable needs of the near-to-medium-term future.

Posted in TUTORIALS