Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘SCSI’

Hardware requirements for Exchange 2010 Virtualization

Posted by Alin D on August 2, 2011

When virtualizing Exchange Server 2010, administrators must provision servers with adequate resources, while at the same time adhering to Microsoft’s requirements. It’s something of an art form.

Hardware requirements only scratch the surface. Virtualizing Exchange Server 2010 also requires attention to storage requirements, memory considerations and fault tolerance.

Hardware requirements for virtualizing Exchange

The hardware requirements for Exchange Server 2010 vary greatly depending on the server roles installed and the anticipated workload. As such, the minimum hardware requirements on Microsoft’s website are often completely inadequate.

Determining the hardware requirements for most Exchange Server roles is fairly straightforward. There are numerous TechNet articles that can guide you through the planning process. Hardware planning for the mailbox server role is the most difficult, especially in large deployments. The best way to accurately estimate the role’s hardware requirements is to use the Exchange 2010 Mailbox Server Role Requirements Calculator.

As you plan hardware allocation for virtualized Exchange servers, it is important to remember that Microsoft does not differentiate between physical and virtual deployments of Exchange Server 2010. The hardware requirements are the same regardless of whether you run Exchange on physical hardware or in virtual machines (VMs).

Storage requirements for virtualizing Exchange

Storage requirements are also a consideration when virtualizing Exchange Server 2010. You can configure Exchange Server 2010 to use virtual hard disks that reside locally on the server, or you can connect to a remote storage mechanism through iSCSI. Exchange Server 2010 also supports SCSI pass-through disks.

If you use virtual hard drives for Exchange Server storage, they must be a fixed size. Microsoft does not support dynamically expanding virtual hard drives with Exchange Server. (This isn’t really an issue with VMware, because VMware uses fixed-length virtual hard disks by default.)
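On a Hyper-V host, a fixed-size disk can also be created and attached from the command line. This is only a sketch: it assumes the Hyper-V PowerShell module that ships with Windows Server 2012 and later (earlier hosts would use the Hyper-V Manager GUI or WMI instead), and the VM name, path and size are placeholders.

```powershell
# Sketch: create a fixed-size VHD for an Exchange database volume and attach it
# to an existing VM. Requires the Hyper-V PowerShell module (Windows Server 2012+).
Import-Module Hyper-V

$vhdPath = 'D:\VHDs\EXCH01-DB01.vhdx'   # placeholder path

# -Fixed allocates the full size up front; Microsoft does not support
# dynamically expanding disks for Exchange storage.
New-VHD -Path $vhdPath -SizeBytes 500GB -Fixed

# Attach the new disk to the Exchange VM's virtual SCSI controller.
Add-VMHardDiskDrive -VMName 'EXCH01' -ControllerType SCSI -Path $vhdPath
```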

If you plan to use iSCSI connectivity, make sure the virtual Exchange server uses a full-featured network stack. If the server does not support the use of jumbo frames, for example, then storage performance may be greatly diminished.
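One quick way to confirm end-to-end jumbo frame support is to send a large ping with the don’t-fragment bit set from inside the Exchange VM toward the iSCSI target. The sketch below uses a placeholder target address; the NetAdapter cmdlets shown require Windows Server 2012 or later in the guest, and the exact advanced-property name and value depend on the NIC driver.

```powershell
# Sketch: verify jumbo frames between the Exchange guest and the iSCSI target.
$iscsiTarget = '192.168.10.50'   # placeholder iSCSI target address

# An 8000-byte, don't-fragment ping only succeeds if the guest NIC, virtual
# switch, physical NIC and storage network all support jumbo frames.
ping.exe $iscsiTarget -f -l 8000

# Inspect and, if needed, enable the jumbo frame setting on the guest NIC
# (property name and value vary by driver).
Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Jumbo Packet'
Set-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Jumbo Packet' -DisplayValue '9014 Bytes'
```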

Memory considerations when virtualizing Exchange

Exchange has always been an I/O intensive application, but one of Microsoft’s major design goals in Exchange Server 2010 was to drive down the I/O requirements. With lower I/O demands, it is more practical to virtualize Exchange mailbox servers. But in order to achieve decreased I/O levels, Exchange 2010 is designed to use large amounts of memory. That means efficient memory use is critical to the server’s performance.

You should avoid using memory overcommit or Dynamic Memory when virtualizing Exchange Server 2010. Memory overcommit works well for servers that occasionally need extra memory to perform an operation and release that memory when the task is complete. Exchange, however, is a memory-hungry application, and the Mailbox Server role usually consumes all the memory that it can.
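On Hyper-V this means leaving Dynamic Memory off and assigning a static allocation sized for the role. A minimal sketch, assuming the Hyper-V PowerShell module from Windows Server 2012 or later and a placeholder VM name and size:

```powershell
# Sketch: give the Exchange mailbox VM a fixed memory allocation instead of
# Dynamic Memory. VM name and size are placeholders; size the value with the
# mailbox role requirements calculator.
Set-VMMemory -VMName 'EXCH-MBX01' -DynamicMemoryEnabled $false -StartupBytes 32GB

# Confirm the change.
Get-VMMemory -VMName 'EXCH-MBX01' | Format-List DynamicMemoryEnabled, Startup
```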

No snapshots allowed

One of the most important things to know about virtualizing Exchange Server 2010 is that it doesn’t support VM snapshots. Snapshots make a point-in-time copy of the VM to be used for backup and recovery.

In Exchange Server 2010, the Mailbox Server, Hub Transport Server, and Edge Transport Server roles all use highly transactional internal databases. Using snapshots to revert a virtualized Exchange server to a previous state could therefore have disastrous consequences, especially in organizations that use multiple Exchange servers.

Fault tolerance

Before the release of Service Pack 1 (SP1), Microsoft didn’t support combining Exchange Server database availability groups with hypervisor-level clusters. Exchange Server 2010 SP1 supports this configuration, but with some limitations. VMs must be configured so that their state is never saved or restored when the VM is taken offline or when a failover occurs.

With all the hardware requirements and resource-allocation considerations, virtualizing Exchange Server 2010 can be a juggling act. Both VMware and Microsoft have released best practices for virtualizing Exchange Server 2010 on their respective virtualization platforms.

 

Posted in Exchange

What you need to know about SQL Server virtualization

Posted by Alin D on July 14, 2011

It is hard to argue with virtualization. Few technologies have had such a sudden and profound impact on the way businesses run their IT operations, saving them money and manpower, all with scant glitches or snafus. But when it comes to databases such as SQL Server, analysts warn there may be a few “gotchas” — namely, SQL Server virtualization risks — lurking out there.

One note of caution came from Peter O’Kelly, principal analyst at O’Kelly Associates. O’Kelly said there are “waves,” or trends, in the IT industry, and the current wave holds that virtualization is supposed to be good for everything. “Now, industry is discovering that there are some places where you may want to dial that back a bit,” he said. “It is probably something that needs to be assessed on a case-by-case basis.”

Virtualization might not always be a good thing for databases in general because it may interfere with the heuristics the database management system uses for data access optimization, which are designed to work directly with the data storage devices.

“Adding virtual storage may result in more disk access operations, and since disk access is measured in milliseconds while memory access [e.g., for cached data] is measured in nanoseconds, the consequences can be significant,” O’Kelly said. “The heuristics will break, the optimizer won’t do everything it is expected to do, and that will create a problem.”

Chris Wolf, an analyst at Gartner Inc., agreed that memory can be an Achilles’ heel for databases in virtualized environments. “Historically, people have run into issues involving memory management,” he said.

For instance, a few years ago hypervisors were using software to emulate physical memory. And, as noted by O’Kelly, when you try to emulate memory in software you run into bottlenecks and end up with slower response times.

However, starting in the second half of 2009, AMD began to introduce AMD-V Rapid Virtualization Indexing on hardware, and Intel early last year released Extended Page Tables. According to Wolf, these developments allow virtual machines to manage their own physical page tables in memory. That removes the software bottleneck. “So a few years ago people virtualizing SQL Server might have said it doesn’t run well, but with the right architecture today, it isn’t a problem,” Wolf said.
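If you want to confirm that a host actually has this hardware assist (AMD calls it RVI, Intel calls it EPT, and Windows reports both as second-level address translation), a quick check is sketched below. The WMI property used is only populated on Windows 8 / Windows Server 2012 and later; on older hosts the CPU vendor's identification utility gives the same answer.

```powershell
# Sketch: report whether the host CPUs expose second-level address translation
# (AMD RVI / Intel EPT). Property requires Windows 8 / Server 2012 or later.
Get-WmiObject -Class Win32_Processor |
    Select-Object Name, SecondLevelAddressTranslationExtensions
```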

Another potential rough spot for SQL Server virtualization involves memory appetite. “SQL will take as much memory as you will give it, and that will cause problems with resource sharing,” Wolf said. “That’s why on a physical server, people must tune it to use as much memory as it needs, not as much as it wants.”

Fortunately, Wolf said, the tuning is straightforward. So, he advises that infrastructure people and SQL Server administration teams make a point of talking about the issue and resolving it.
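The tuning Wolf describes usually comes down to capping SQL Server's max server memory so the instance leaves headroom for the guest OS. A hedged sketch, assuming the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell components, a placeholder instance name and an illustrative 12 GB cap:

```powershell
# Sketch: cap SQL Server's memory appetite inside the VM. Instance name and the
# 12 GB (12288 MB) cap are placeholders; size the cap for your workload.
$query = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
"@

Invoke-Sqlcmd -ServerInstance 'SQLVM01' -Query $query
```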

The same can be said for making sure I/O is optimized. As an example, Wolf cited vSphere 4, in which VMware introduced Paravirtual SCSI. “It is a new storage driver to provide accelerated I/O to access storage, and they introduced a new feature — storage I/O control — which lets you prioritize storage access for certain applications, so one app won’t take over all of the I/O,” he explained.

Similar tuning issues concerned Greg Shields, an IT analyst at consulting firm Concentrated Technology. Although almost anything can be virtualized, Shields said, implementation can still present challenges.

However, he stressed, the power of virtualization is such that the “overhead” of virtualizing is now so minimal that performance is “almost native” anyway. And that’s bound to be good news for those running SQL Server.

“Today, with the right architecture, there is no reason you can’t run a SQL Server workload in a virtual machine environment,” Wolf said. “We have had many of our customers doing this with large-scale databases. Our position is that virtualization should be the default platform for all your apps in an x86 environment. The onus should be on the owner to show why it isn’t good rather than on IT to show why it is needed.”

Posted in SQL

Five tips to Virtualize Microsoft Exchange Server 2007

Posted by Alin D on May 26, 2011

As we noted in our introduction to this guide on virtualizing mission-critical applications, IT shops have begun to see the virtues of virtualizing resource-intensive applications. So can you virtualize Microsoft Exchange servers? Should you? Microsoft Exchange Server virtualization raises major questions for virtualization and email administrators alike. No matter which hypervisor you use, is Exchange a good candidate for virtualization?

Your immediate answer to this question should be, “if virtualization makes sense in terms of performance.” Above all else, if you can eke out acceptable performance after you virtualize Microsoft Exchange, the intrinsic benefits of virtualizing (backups, disaster recovery and so on) greatly outweigh the costs and added complexity.

With that statement in mind, consider the following five tips before you virtualize Microsoft Exchange:

Tip no. 1: Ensure supportability
Microsoft’s support of Exchange in a virtualized environment holds true only when fairly specific criteria are met. Before considering any new virtual Exchange servers, ensure that you meet the following minimums:

 

  • The hardware virtualization software used is Windows Server 2008 Hyper-V, Microsoft Hyper-V Server, or any third-party hypervisor that has been validated under the Windows Server Virtualization Validation Program.
  • The Exchange Server virtual machine runs Exchange Server 2007 SP1 or later, is deployed on Windows Server 2008, does not have the Unified Messaging server role installed, and meets the minimum hardware requirements for Exchange.
  • The Exchange Server virtual machine uses fixed-size storage, SCSI pass-through storage or iSCSI storage. Microsoft does not support dynamically expanding and differencing disks for Exchange use.

Tip no. 2: Consider server consolidation ratios of 1:1
For many environments, the primary goal of virtualization is to consolidate as many virtual machines onto as few physical hosts as possible. Squeezing 20 virtual machines onto a host (as opposed to 10) uses fewer physical servers to power your infrastructure, but it also means that more virtual machines vie for the same resources. That resource contention isn’t a good thing for Exchange.

 

One of the benefits of Microsoft’s Hyper-V solution is that it costs you nothing over and above the OS license you already have. This means that virtualizing an Exchange server can be done with your existing licenses. With that in mind, consider 1:1 as a consolidation ratio for your Exchange servers, where you have one virtual machine per physical host. By consolidating in this manner, you eliminate the possibility of resource contention while retaining the benefits of virtualization for your Exchange Servers.

Tip no. 3: Select a failover mechanism, but only one
Microsoft supports the use of Exchange Cluster Continuous Replication (CCR) as well as Single Copy Clusters (SCC), Local Continuous Replication (LCR), and Standby Continuous Replication (SCR) in virtualized Exchange environments. It also supports the use of hypervisor-based failover solutions such as Microsoft Live Migration or VMware VMotion. But it does not support using both of these technologies simultaneously. If you elect to virtualize your Exchange environment, choose one of these options for failover, but only one.

 

Tip no. 4: Never snapshot Exchange
Snapshots are a great technology for creating an instant backup of an Exchange server; but these snapshots are not application-aware. While taking a snapshot and immediately reverting to that snapshot might appear to work flawlessly, there can be unintended consequences and potential data loss when snapshots are allowed to remain over a period of time. This is because of the state-based nature of Exchange data. As a result, Microsoft does not support the use of snapshots for Exchange servers.

Tip no. 5: Be conscious of storage space

Remember that Hyper-V Virtual Hard Disk files cannot exceed 2,040 GB, which amounts to a size just short of 2 TB. While this might seem like a large amount of space, Exchange data stores can grow exceptionally large when unmonitored. Environments can circumvent this limitation by using iSCSI within a virtual machine to connect to data stores or by exploiting pass-through disks. These alternate approaches do not have the same limitations on disk space.

Posted in Exchange

Step by step Data Protection Manager 2010 installation

Posted by Alin D on February 3, 2011

Probably a lot of you are already aware of Microsoft Data Protection Manager (DPM). It is Microsoft’s enterprise backup product for protecting Microsoft application and server workloads.

  DPM can back up these products particularly well because the Data Protection Manager product group at Microsoft works closely with the other product groups to learn the recommended backup procedures. The point of this blog post is not to describe the benefits of DPM 2010, but rather to describe the installation in the easiest possible way; if you are reading this post, I assume you are already somewhat familiar with DPM’s capabilities.

  In this blog post you will find an easy step-by-step procedure for installing DPM 2010. I performed all of the steps below on a notebook running Microsoft Windows Server 2008 R2 with the Hyper-V role installed, and on that notebook I created a Windows Server 2008 R2 virtual machine on which DPM 2010 was installed. If you would like to try installing DPM, you should have at least the following (a short verification script follows the list):

  • One machine for DPM 2010 (it can be a virtual or a physical machine).

  • One disk for the DPM installation and one disk for the DPM storage pool. The storage pool is where DPM stores backup data; it must be a clean partition, and it cannot be a USB disk. If you are using a virtual machine you can add a VHD, as you will see later.

  • Microsoft .NET Framework 3.5 with Service Pack 1.

  • A supported operating system on the machine where you are installing DPM:

     – Windows Server 2008 64-bit

     – Windows Server 2008 R2 64-bit

  • The DPM installation media, which you can download from MSDN.

  • A domain. DPM will not install if the machine is not joined to a domain; if you are using a computer with Hyper-V, you can configure the domain on the physical host or in another virtual machine.
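The sketch below is one way to verify these prerequisites from PowerShell before launching Setup. It only checks domain membership, the operating system edition and the .NET Framework 3.5 registry marker, and is purely illustrative.

```powershell
# Sketch: pre-flight check for the DPM 2010 server (domain membership, 64-bit
# OS, .NET Framework 3.5 SP1). Adjust for your environment.
$cs = Get-WmiObject Win32_ComputerSystem
$os = Get-WmiObject Win32_OperatingSystem

"Domain joined    : $($cs.PartOfDomain) ($($cs.Domain))"
"Operating system : $($os.Caption) $($os.OSArchitecture)"

$net35 = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5' -ErrorAction SilentlyContinue
if ($net35 -and $net35.Install -eq 1) {
    ".NET Framework 3.5 installed, Service Pack $($net35.SP)"
} else {
    ".NET Framework 3.5 SP1 is missing"
}
```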

If you have covered all of the above, you can start installing. The DPM 2010 installation procedure is pretty straightforward, and all prerequisites are installed by the DPM setup. In previous versions you had to install the IIS role and Single Instance Storage yourself; in DPM 2010 the IIS role is no longer needed and Single Instance Storage is installed as part of the DPM setup.

1. The first step is the Welcome page. Click Next.

2. Next comes the prerequisites check. Setup first checks for basic components and required hardware, and then for system attributes such as Single Instance Storage. In my case Single Instance Storage was not installed, so I clicked Next to have it installed as part of Setup. If you were wondering, DPM uses Single Instance Storage (SIS) to optimize storage space.

3. After Setup installs Single Instance Storage, you should restart your computer.


4. Start the installer again manually; the prerequisites check should now pass. Click Next.

5. Enter your name and company. Click Next.

6. Choose whether to install SQL Server 2008 as part of Setup or to use an existing SQL Server instance. If you install SQL Server 2008 as part of the DPM 2010 Setup, everything is installed automatically.

7. Enter a password for the local user accounts that DPM creates automatically.

8. Choose whether you would like to use Microsoft Update. Click Next.

9. Choose whether you would like to participate in the Customer Experience Improvement Program. Click Next.

10. Review the settings. If everything looks correct, click Install.

11. Setup installs SQL Server first; this takes about 20 minutes.

12. After SQL Server finishes installing, Setup proceeds to install Data Protection Manager 2010 itself, which takes roughly 10 minutes or less.

13. If everything went well, you will see three green check marks. Click Close.

14. On the desktop you will see two icons: one for the DPM Administrator Console and one for the DPM Management Shell. Double-click the DPM Administrator Console shortcut to open DPM.

15. The first thing you should do now is add a disk to the DPM storage pool. You can do that on the Management tab of the DPM console. But before that, let’s see how to add a disk to the operating system if you are running DPM in a Hyper-V virtual machine.

16. Open the Settings for the DPM virtual machine. Click the SCSI Controller and then click Add to add a hard drive.

17. Select that hard drive and create a new virtual hard disk (.vhd) file by clicking the New button. After clicking New, choose the type of disk; in my case I chose a fixed disk.

18. Enter a name for the new VHD and choose the location where you would like it to be stored. Click Next.

19. Choose the size of the new blank virtual hard disk. Click Next.

20. Click Finish. Because this is a fixed disk, you will have to wait a couple of minutes while the disk is created; the duration depends on the size of the newly created virtual hard disk.

21. Click OK to close the Settings dialog for the DPM virtual machine.

22. Click Start, right-click Computer and choose Manage, then navigate to Storage > Disk Management. You will see the newly created disk. Right-click where it says “Disk 1, Unknown, 10 GB, Offline” and choose Online.
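If you prefer the command line, the same onlining can be scripted with diskpart. This is a sketch only: the disk number is a placeholder, "convert mbr" simply initializes the blank disk, and the disk should be left unpartitioned because DPM manages the storage pool space itself.

```powershell
# Sketch: bring the new storage-pool disk online without Disk Management.
$dpScript = @"
select disk 1
online disk
attributes disk clear readonly
convert mbr
"@

$dpScript | Set-Content "$env:TEMP\dpm-disk.txt" -Encoding ASCII
diskpart.exe /s "$env:TEMP\dpm-disk.txt"
```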

23. Navigate back to the DPM console, to the Management tab and the Disks sub-tab. On the right side of the console click Rescan, and then click Add. In the window that opens, select the disk, click Add, and then click OK.

24. The disk now appears in the Disks list.

25. That’s it. If you followed all of the steps, you should have DPM 2010 successfully installed and configured with one disk in the storage pool. Of course, before you can start backing up data you should install protection agents on the computers you want to protect.

  The procedure described here for installing DPM and configuring the storage pool is very useful for demo scenarios. For production environments you should carefully size the DPM server and plan the disk storage pool.

Posted in TUTORIALS

Availability and recovery options when running Exchange 2010 in a virtual environment

Posted by Alin D on September 27, 2010

Virtual servers can benefit an organization’s IT operations in many ways. One of them is leveraging their native strengths to broaden the availability and recovery options for Microsoft Exchange 2010 deployments.

Most administrators can cite the benefits of virtual machines by rote:

* They’re portable, so Exchange need no longer be bound to a particular piece of hardware. That means design decisions don’t need to be permanent. CPU and memory requirements can be changed with a reconfiguration and reboot. What’s more, new hardware can be more easily accommodated because the virtual machine containing Exchange can simply be transferred to the new machine.

* They’re hardware independent so planners have greater design flexibility putting together the production as well as the disaster recovery components of a system.

Some virtual machine vendors, like VMware, have included robust availability features into their software. For example, the company’s High Availability product can act as a first line of defense against server failure. If a physical server or any critical component in a server goes down or fails, HA will automatically reboot the Exchange virtual machine on another physical server.

Another VMware product, Distributed Resource Scheduler, is designed to automatically manage workloads for virtual machines on a network. Better management of demand on a network means less latency and happier users. For example, if a virtual machine becomes bottlenecked, DRS can automatically move it to another host with more resources. Better yet, it can do that without subjecting the system to downtime.

The product can also speed recovery from hardware failures. For instance, after HA addresses a breakdown in a physical server by moving an Exchange virtual machine to another physical server, it’s DRS that migrates the Exchange VM back to its original home after it’s fixed, once again without downtime or any hiccups to the system’s users.

Running Exchange in a virtual environment can increase the availability of the program across its lifecycle. Virtualized Exchange can easily recover from planned or unplanned hardware outages, from hardware degradation by better load management and from application failure by using Microsoft Cluster Service within a virtual machine.

In addition, the architecture of virtual machines has multi-pathing capabilities and advanced queueing techniques that can be leveraged in a virtual Exchange environment to improve network performance. For instance, they can be used to increase IOPS transactions, which will allow more clients to be served. Those technologies can also be used to balance the workloads of multiple Exchange servers that are sharing the same physical server to use multiple SAN paths and storage processor ports.

An added bonus of locating Exchange on a Virtual Machine File System (VMFS) volume is the avoidance of SAN errors, because VMFS hides SAN errors from guest operating systems.

Upgrades can be a bear in Exchange environments. Not only are they complicated to perform, but they can produce downtime which doesn’t produce happy faces in an organization.

A typical upgrade involves allocating engineering resources–including application, server and SAN administration–for planning and implementation, sizing and acquisition of new hardware and, of course, the downtime to perform the upgrade.

Compare that to an upgrade in a virtual environment. Scaling up your Exchange environment, for instance, is as easy as adding more Exchange virtual machines as your client base grows.

When Exchange is running on a physical server it’s tightly bound to a storage technology and can be very challenging to scale. Adding more storage to an Exchange virtual machine, however, can be easier. VMware’s vSphere software, for example, treats the new storage as a simple SCSI device. That means regardless of the storage technology–SCSI or Fibre Channel–the Exchange environment can be upgraded without a sneeze.

Changing the storage capacity for Exchange when it’s running on a physical server can be difficult, too. Not so in the virtual environment. With VMware’s Virtual Machine File System, for instance, storage capacities to Exchange virtual machines can be changed on the fly with its hot add/remove storage feature.

As VMware notes in a recent white paper on availability and recovery options when running Exchange 2010 in a virtual environment: “Although application-level clustering has been the prevalent solution for most Exchange implementations, features of the vSphere platform can enhance the overall availability of Exchange by providing options that help to limit both planned and unplanned downtime.”

“In fact,” the company added, “for many organizations, the features provided by vSphere may satisfy the availability requirements of their business without needing to follow traditional clustering approaches.”

As for organizations with high availability requirements, VMware notes, “application-level clustering can be combined with the vSphere features to create an extremely flexible environment, with options for failover and recovery at both the hardware and application levels.”

Posted in Exchange

Optimizing Hyper-V performance: Advanced fine-tuning

Posted by Alin D on September 8, 2010

Check out the following optimization guidelines for Hyper-V…

Hyper-V Integration Services
Let’s start with a simple, common sense practice: Ensure that you use the latest version of Hyper-V’s integration services. This simple setup program installs the latest available drivers for supported guest OSes (and some that are not officially supported). The result is improved performance when VMs make calls to hardware. This should generally be the first thing one does after installing a guest OS. Keep in mind that updated versions of integration services might be released to improve performance between major releases of Hyper-V.

Use synthetic network drivers
Hyper-V supports two types of virtual network drivers: emulated and synthetic. Emulated drivers provide the highest level of compatibility. Synthetic drivers are far more efficient, because they use a dedicated VMBus to communicate between the virtual network interface card (NIC) and the root/parent partition’s physical NIC. To verify which drivers are used from within a Windows guest OS, you can use Device Manager.
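A quick alternative to Device Manager is to list the adapters from inside the guest. The sketch below assumes the device names reported by your Hyper-V version; synthetic adapters typically identify themselves as a Virtual Machine Bus (or Hyper-V) network adapter, while the emulated legacy adapter appears as an Intel 21140-based adapter.

```powershell
# Sketch: list active network adapters inside the guest to see whether the
# synthetic or the emulated (legacy) adapter is in use.
Get-WmiObject Win32_NetworkAdapter |
    Where-Object { $_.NetConnectionStatus -eq 2 } |   # 2 = connected
    Select-Object Name, NetConnectionID
```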

The type of network adapter installed can be changed by adjusting the properties of the VM. For changes to take effect, in some cases a VM will need to be shut down or rebooted. The payoff is usually worth it, though: If synthetic drivers are compatible, you’ll likely see lower CPU utilization and lower network latency.

Increasing network capacity
Network performance is important for various types of applications and services. Whether running one or a few VMs, you can often get by with just a single physical NIC on the host server. But if many VMs compete for resources, or if network traffic must be physically segregated for security, consider adding multiple gigabit Ethernet NICs to the host server. Also, NICs that support features such as TCP offloading can improve performance by managing overhead at the network interface level. Just be sure that this feature is enabled in the adapter’s drivers in the root/parent partition.

Another key is, whenever possible, to segregate VMs onto separate virtual switches. Each virtual switch can be bound to a different physical NIC port on the host, allowing for compartmentalization of VMs for security and performance reasons. VLAN tagging can also be used to segregate traffic for different groups of VMs that use the same virtual switch.
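As a sketch of what that segregation looks like in practice, the commands below create a switch bound to a second physical NIC and put a VM's traffic on its own VLAN. The cmdlets are from the Hyper-V module in Windows Server 2012 and later (the original Hyper-V release does the same thing through Virtual Network Manager); the switch name, adapter name, VM name and VLAN ID are placeholders.

```powershell
# Sketch: dedicate a virtual switch to one physical NIC port and tag a VM's
# traffic onto its own VLAN.
New-VMSwitch -Name 'DMZ-Switch' -NetAdapterName 'Ethernet 2' -AllowManagementOS $false

# Move the VM's network adapter to the new switch and tag its traffic.
Connect-VMNetworkAdapter -VMName 'WEB01' -SwitchName 'DMZ-Switch'
Set-VMNetworkAdapterVlan -VMName 'WEB01' -Access -VlanId 20
```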

Minimize OS overhead
A potential drawback of running a full operating system on virtual host servers comes in the form of OS overhead. You can deploy Hyper-V in a minimal, slimmed-down version of Windows Server 2008 by using the Server Core installation option. This configuration lacks the standard administrative tools, but it also avoids a lot of OS overhead. It also lowers the security “surface area” of the server and removes many services and processes that might compete for resources. It’s really a stripped-down version of the Windows OS that’s optimized for specific tasks. You’ll need to use remote management tools from another Windows machine to manage Hyper-V, but the performance benefits often make it worth the effort.

Virtual CPUs and multiprocessor cores
Hyper-V supports up to four virtual CPUs for Windows Server 2008 guest OSes and up to two virtual CPUs for various other supported OSes. That raises the question: When should you use this feature? Many applications and services are designed to run in a single-threaded manner. This leads to the common issue of seeing two CPUs on a server both running at 50% utilization when a single application is cranking. From the level of the guest OS and the hypervisor itself, spreading CPU calls across processor cores can be expensive and complicated. The bottom line is that you should use multiple virtual CPUs only for those VMs that have applications and services that can benefit from them.

Memory matters
A rule of thumb is to allocate as much memory to a VM as you would for the same workload running on a physical machine; but that doesn’t mean that you should waste physical memory. If you have a good idea of how much RAM is required for running a guest OS and all of the applications and services the workload requires, start there. You should also add a small amount of additional memory for overhead related to virtualization (an additional 64 MB is usually plenty.)

A lack of available memory can create numerous problems, such as excessive paging within a guest OS. This latter issue can be confusing, because it might initially seem as though the problem is disk I/O performance, when the root cause is often that too little memory has been assigned to the VM. It’s important to monitor the needs of your applications and services, which is most easily done from within a VM, before you make sweeping changes throughout a data center.
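A simple way to do that monitoring from inside the guest is to sample the memory counters for a minute before resizing anything; sustained hard page faults point at too little RAM rather than slow disks. A minimal sketch:

```powershell
# Sketch: sample guest memory pressure (12 samples, 5 seconds apart).
Get-Counter -Counter '\Memory\Pages/sec', '\Memory\Available MBytes' `
    -SampleInterval 5 -MaxSamples 12
```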

SCSI and disk performance
Disk I/O performance is a common bottleneck for many types of VMs. You can choose to attach virtual hard disks (VHDs) to a Hyper-V VM using either virtual IDE (integrated drive electronics) or SCSI controllers. IDE controllers are the default because they provide the highest level of compatibility for a broad range of guest OSes. But SCSI controllers can reduce CPU overhead and enable a virtual SCSI bus to handle multiple transactions simultaneously. If your workload is disk-intensive, consider using only virtual SCSI controllers if the guest OS supports that configuration. If that’s not possible, add additional SCSI-connected VHDs (preferably ones that are stored on separate physical spindles or arrays on the host server).
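A sketch of that change, assuming the Hyper-V PowerShell module from Windows Server 2012 or later and placeholder VM and path names: list which controller each existing disk uses, then attach an additional data VHD to the virtual SCSI controller, ideally stored on a separate physical spindle or array.

```powershell
# Sketch: check controller assignments, then add a SCSI-attached data disk.
Get-VMHardDiskDrive -VMName 'SQLVM01' |
    Select-Object ControllerType, ControllerNumber, Path

Add-VMHardDiskDrive -VMName 'SQLVM01' -ControllerType SCSI `
    -Path 'E:\VHDs\SQLVM01-Data2.vhdx'
```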

Posted in TUTORIALS, Windows 2008

Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments

Posted by Alin D on September 8, 2010

Microsoft has published an interesting article about its official support for installing Exchange on Hyper-V and Virtual Server 2005 virtual machines.

To make a long story short, here are the support policies for both Exchange 2007 and Exchange 2003:

Support Policy and Recommendations for Exchange Server 2007

Microsoft supports Exchange Server 2007 in production on hardware virtualization software only when all the following conditions are true:

  • The hardware virtualization software is Windows Server 2008 with Hyper-V technology, Microsoft Hyper-V Server, or any third-party hypervisor that has been validated under the Windows Server Virtualization Validation Program.
  • The Exchange Server guest virtual machine:
    • Is running Microsoft Exchange Server 2007 with Service Pack 1 (SP1) or later.
    • Is deployed on the Windows Server 2008 operating system.
    • Does not have the Unified Messaging server role installed. All Exchange 2007 server roles, except for the Unified Messaging role, are supported in a virtualization environment.
  • The storage used by the Exchange Server guest machine can be virtual storage of a fixed size (for example, fixed virtual hard drives (VHDs) in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that is configured at the host level and dedicated to one guest machine.

    Note: In a Hyper-V environment, each fixed VHD must be less than 2,040 gigabytes (GB). For supported third-party hypervisors, check with the manufacturer to see if any disk size limitations exist.

    • Virtual disks that dynamically expand are not supported by Exchange.
    • Virtual disks that use differencing or delta mechanisms (such as Hyper-V’s differencing VHDs or snapshots) are not supported.
  • No other server-based applications, other than management software (for example, antivirus software, backup software, virtual machine management software, etc.) can be deployed on the physical root machine. The root machine should be dedicated to running guest virtual machines.
  • Microsoft does not support combining Exchange clustering solutions (namely, cluster continuous replication (CCR) and single copy clusters (SCC)) with hypervisor-based availability or migration solutions (for example, Hyper-V’s quick migration). Both CCR and SCC are supported in hardware virtualization environments provided that the virtualization environment does not employ clustered virtualization servers.
  • Some hypervisors include features for taking snapshots of virtual machines. Virtual machine snapshots capture the state of a virtual machine while it is running. This feature enables you to take multiple snapshots of a virtual machine and then revert the virtual machine to any of the previous states by applying a snapshot to the virtual machine. However, virtual machine snapshots are not application-aware, and using them can have unintended and unexpected consequences for a server application that maintains state data, such as Exchange Server. As a result, making virtual machine snapshots of an Exchange guest virtual machine is not supported.
  • Many hardware virtualization products allow you to specify the number of virtual processors that should be allocated to each guest virtual machine. The virtual processors located in the guest virtual machine share a fixed number of logical processors in the physical system. Exchange supports a virtual processor-to-logical processor ratio no greater than 2:1. For example, a dual processor system using quad core processors contains a total of 8 logical processors in the host system. On a system with this configuration, do not allocate more than a total of 16 virtual processors to all guest virtual machines combined.
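For reference, the supported ceiling is easy to compute on any host. The sketch below just counts logical processors through WMI and doubles the figure per the 2:1 guidance; it is illustrative only.

```powershell
# Sketch: derive the supported virtual-processor ceiling for all guests combined.
$logical = (Get-WmiObject Win32_Processor |
    Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum

"Logical processors on host        : $logical"
"Max virtual processors (2:1 rule) : $($logical * 2)"
```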

Support Policy and Recommendations for Exchange Server 2003

Microsoft supports Exchange Server 2003 in production on hardware virtualization software (virtual machines) only when all the following conditions are true:

  • The hardware virtualization software is Microsoft Virtual Server 2005 R2 or any later version of Microsoft Virtual Server.
  • The version of Exchange Server that is running on the virtual machine is Microsoft Exchange Server 2003 with Service Pack 2 (SP2) or later.
  • The Microsoft Virtual Server 2005 R2 Virtual Machine Additions are installed on the guest operating system.
  • Exchange Server 2003 is configured as a stand-alone server and not as part of a Windows failover cluster.
  • The SCSI driver that is installed on the guest operating system is the Microsoft Virtual Machine PCI SCSI Controller driver.
  • The virtual hard disk Undo feature is not enabled for the Exchange virtual machine.

    Note: When a Microsoft Virtual Server SCSI adaptor is added to a virtual machine after the Virtual Machine Additions have been installed, the guest operating system detects and installs a generic Adaptec SCSI driver. In this case, the Virtual Machine Additions must be removed and then reinstalled for the correct SCSI driver to be installed on the guest operating system.

The rest of the recommendations are available on the Microsoft website.

Posted in Exchange

Installing SQL Server 2008 on a Windows Server 2008 Cluster – Part 1

Posted by Alin D on August 11, 2010

There have been a lot of changes regarding clustering between Windows Server 2003 and Windows Server 2008. It took quite a lot of effort to build a cluster in Windows Server 2003 – from making sure that the server hardware for all nodes was cluster-compatible to creating resource groups. Microsoft has redefined clustering with Windows Server 2008, making it simpler and easier to implement. Now that both SQL Server 2008 and Windows Server 2008 have been on the market for quite some time, it is essential to be prepared to set up and deploy a clustered environment running both. Installing SQL Server on a stand-alone server or a member server in the domain is pretty straightforward; dealing with clustering is a totally different story. The goal of this series of tips is to help DBAs who may be charged with installing SQL Server on a Windows Server 2008 cluster.

Prepare the cluster nodes

I will be working on a 2-node cluster throughout the series, and you can extend it by adding nodes later on. You can do these steps on physical hardware or in a virtual environment; I opted for a virtual environment running VMware. To start with, download and install a copy of the evaluation version of Windows Server 2008 Enterprise Edition. This is pretty straightforward and does not even require a product key or activation. The evaluation period runs for 60 days and can be extended up to 240 days, so you have more than enough time to play around with it. Just make sure that you select at least the Enterprise Edition during the installation process and have at least 12 GB of disk space for your local disks, so there is enough room for both Windows Server 2008 and the SQL Server 2008 binaries. A key thing to note here is that you should already have a domain to which to join these servers, and that both servers have at least two network cards – one for the public network and the other for the heartbeat. Although you can run a cluster with a single network card, it isn’t recommended at all. I’ll lay out the details of the network configuration as we go along. After the installation, my recommendation is to immediately install .NET Framework 3.5 with Service Pack 1 and Windows Installer 4.5 (the one for Windows Server 2008 x86 is named Windows6.0-KB942288-v2-x86.msu). These two are prerequisites for SQL Server 2008 and will speed up the installation process later on.

Carve out your shared disks

We had a lot of challenges in Windows Server 2003 when it came to the shared disks used by our clusters. For one, the 2 TB limit, which has a lot to do with the master boot record (MBR), has been overcome by GUID Partition Table (GPT) support in Windows Server 2008, which allows you to have 16 exabytes for a partition. Another change is the use of directly attached SCSI storage: it is no longer supported for failover clustering in Windows Server 2008. The only supported options are Serial Attached SCSI (SAS), Fibre Channel and iSCSI. For this example, we will be using iSCSI storage with the help of an iSCSI software initiator to connect to a software-based target. I am using StarWind’s iSCSI SAN to emulate a disk image that my cluster will use as shared disks. In preparation for running SQL Server 2008 on this cluster, I recommend creating at least four disks – one for the quorum disk, one for MSDTC, one for the SQL Server system databases and one for the user databases. Your quorum and MSDTC disks can be as small as 1 GB, although Microsoft TechNet specifies a 512 MB minimum for the quorum disk. If you decide to use iSCSI as your shared storage in a production environment, a dedicated network should be used to isolate it from all other network traffic. This also means having a dedicated network card on your cluster nodes to access the iSCSI storage.

Present your shared disks to the cluster nodes

Windows Server 2008 comes with iSCSI Initiator software that enables connection of a Windows host to an external iSCSI storage array using network adapters. This differs from previous versions of Microsoft Windows, where you needed to download and install this software prior to connecting to iSCSI storage. You can launch the tool from Administrative Tools by selecting iSCSI Initiator.

To connect to the iSCSI target:

  1. In the iSCSI Initiator Properties page, click on the Discovery tab.
  2. Under the Target Portals section, click on the Add Portal button.
  3. In the Add Target Portal dialog, enter the DNS name or IP address of your iSCSI Target and click OK. If you are hosting the target on another Windows host as an image file, make sure that you have your Windows Firewall configured to enable inbound traffic to port 3260. Otherwise, this should be okay.
  4. Back in the iSCSI Initiator Properties page, click on the Targets tab. You should see a list of the iSCSI targets that we defined earlier.
  5. Select one of the targets and click on the Log on button.
  6. In the Log On to Target dialog, select the Automatically restore this connection when the computer starts checkbox. Click OK.
  7. Once you are done, you should see the status of the target change to Connected. Repeat this process for all the target disks we initially created on both of the servers that will become nodes of your cluster.
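The same connection can be scripted. The sketch below uses the iSCSI cmdlets that ship with Windows Server 2012 and later (on Windows Server 2008 itself the iscsicli.exe utility covers the same ground); the portal address is a placeholder.

```powershell
# Sketch: scripted equivalent of the iSCSI Initiator GUI steps above.
New-IscsiTargetPortal -TargetPortalAddress '192.168.20.10'

# List the targets exposed by the portal, then log on persistently so the
# connection is restored after a reboot (step 6 above).
Get-IscsiTarget

Get-IscsiTarget | Where-Object { -not $_.IsConnected } | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
}
```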

Once the targets have been defined using the iSCSI Initiator tool, you can now bring the disks online, initialize them, and create new volumes using the Server Manager console. I won’t go into much detail on this process as it is similar to how we used to do it in Windows Server 2003, except for the new management console. After the disks have been initialized and volumes created, you can try logging in to the other server and verify that you can see the disks there as well. You can rescan the disks if they haven’t yet appeared.

Adding Windows Server 2008 Application Server Role

Since we will be installing SQL Server 2008 later on, we will have to add the Application Server role on both of the nodes. A server role is a program that allows Windows Server 2008 to perform a specific function for multiple clients within a network. To add the Application Server role,

  1. Open the Server Manager console and select Roles.
  2. Click the Add Roles link. This will run the Add Roles Wizard.
  3. In the Select Server Roles dialog box, select the Application Server checkbox. This will prompt you to add features required for the Application Server role. Click Next.
  4. In the Application Server dialog box, click Next.
  5. In the Select Role Services dialog box, select the Incoming Remote Transactions and Outgoing Remote Transactions checkboxes. These options will be used by MSDTC. Click Next.
  6. In the Confirm Installation Selections dialog box, click Install. This will go through the process of installing the Application Server role.
  7. In the Installation Results dialog box, click Close. This completes the installation of the Application Server role on the first node. You will have to repeat this process for the other server; a scripted alternative follows these steps.
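The role can also be added from PowerShell on each node. A minimal sketch, assuming the ServerManager module available from Windows Server 2008 R2 onward; confirm the exact feature and role-service names with Get-WindowsFeature before relying on them.

```powershell
# Sketch: scripted alternative to the Add Roles wizard (run on each node).
Import-Module ServerManager

# List the Application Server role and its role services to confirm names.
Get-WindowsFeature Application-Server*

# Install the base role; add the incoming/outgoing remote transaction role
# services using the names reported above.
Add-WindowsFeature Application-Server
```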

We have now finished preparing the cluster nodes. In the next tip in this series, we will go through the process of installing the Failover Cluster feature, validating the nodes that will become part of the cluster, and creating the cluster itself. And that is just the Windows side; only once we have a working Windows Server 2008 cluster can we proceed to install SQL Server 2008.

Posted in Windows 2008