Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘virtual infrastructure’

Automatic deployment of Windows OS in virtual infrastructures

Posted by Alin D on May 29, 2011

There are many ways to deploy a Windows operating system in your virtual infrastructure.

Most of us don’t have the expensive cloud automation tools that vendors sell today, so we stick to the basics and use clones, templates, unattended XML files and home-grown scripting to get our OS deployment done.
But what if you had a truly automated process that could be used for both server and desktop OS deployment? And you could do both virtual and physical bare-metal OS deployment? And the OS deployment software was free?

I’ve been tinkering with Windows deployment in virtual infrastructures over the past few months and found different methods along the way. Not all suited my infrastructure, but any of them could work for you. Here’s the path I took on my search for a free, dynamic OS deployment method.

Using templates for OS deployment
The first OS deployment method I looked into was templates. On VMware, as well as other platforms, using templates to create multiple virtual machines (VMs) is a well-known and widely used practice. You create a template with these steps:

installing the VM from the original install media
configuring and customizing the VM
installing applications
updating the VM with patches and service packs
running Sysprep
and finally converting the VM to a static template.
Now that it’s a template, you can create what VMware calls customization specifications, which enable small changes to certain aspects of a VM when it’s deployed from the template to an actual virtual server.

But these specifications are really just Sysprep answer files, and their capabilities are limited. This OS deployment method requires some extensive post-installation work, depending on the requirements of the deployment. Plus, you’ll have several templates, mostly static in nature, which consume quite a bit of valuable shared disk space. Overall, this is not my idea of a dynamic and scalable method for Windows deployment. It also requires vCenter Server, which is hardly free.
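
For comparison, here is a minimal PowerCLI sketch of that template workflow: cloning a VM from a template and applying a customization specification. The vCenter server, template, spec, host and datastore names are placeholders, not anything from the original environment.

# Connect to vCenter (placeholder server name)
Connect-VIServer -Server vcenter01.example.local

# Deploy a new VM from an existing template, applying a Sysprep-based
# customization specification created beforehand in vCenter
New-VM -Name "WEB01" `
       -Template (Get-Template -Name "Win2008R2-Base") `
       -OSCustomizationSpec (Get-OSCustomizationSpec -Name "Win2008R2-Spec") `
       -VMHost "esx01.example.local" `
       -Datastore "SharedLUN01"

# Power on the clone once the copy completes
Start-VM -VM "WEB01"
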
Open source OS deployment
I also looked at open source OS deployment methods. In the past, I’ve used an open source tool called the Ultimate Deployment Appliance (UDA), mainly for Linux and VMware ESX host deployment. It worked well, but it was somewhat difficult to navigate and required manual configuration of each ISO. Plus, Windows 7 deployment support is still in beta (how long has this OS been out?) in version 2.0, so I decided not to pursue it.

I also found that Dell sells a deployment appliance called the Kace K2000, but two things bothered me about its description. First, it’s a much larger management tool, not just OS deployment software, which is overkill for my needs. And second, it isn’t free, so that put the nail in the coffin.

Microsoft Deployment Toolkit to the rescue
I was still looking for a good Windows deployment method, and a friend of mine sent me a link to the Microsoft Deployment Toolkit (MDT). Years ago, I researched MDT when it was a proprietary offering from Microsoft. Now MDT 2010 Update 1 is a much more mature OS deployment tool, better integrated with System Center Configuration Manager, and best of all, it’s completely free to everyone. It has a few prerequisites, but all of them are free as well, so I installed it to check out the Windows deployment possibilities.

Getting started with the MDT was easy, with its Microsoft Management Console-style interface and descriptive documentation. It also requires the Windows Automated Installation Kit (WAIK), which contains the core Windows PE (WinPE) components necessary for OS deployment. Think of it this way: The MDT provides the back-end configuration for the front-end WinPE installation and OS deployment.
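
As a rough illustration of that back end, MDT also exposes the deployment share through a PowerShell provider, so most of the console work can be scripted. The snap-in and cmdlet names below are the ones MDT 2010 installs, but the drive name, share path and source media path are hypothetical.

# Load the MDT snap-in (installed with the Microsoft Deployment Toolkit)
Add-PSSnapin Microsoft.BDD.PSSnapIn

# Map an existing deployment share as a PowerShell drive (hypothetical path)
New-PSDrive -Name "DS001" -PSProvider "MDTProvider" -Root "D:\DeploymentShare"

# Import a Windows 7 image from mounted install media into the share
Import-MDTOperatingSystem -Path "DS001:\Operating Systems" `
    -SourcePath "E:\" -DestinationFolder "Windows 7 x64" -Verbose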

The MDT can also integrate with System Center Configuration Manager (SCCM) for a complete zero-touch Windows deployment method. You can also add the Office Customization Tool if you want to automate and customize Microsoft Office 2010 in your OS deployment.

I liked the idea of doing a lite-touch OS deployment instead, because SCCM isn’t free. I was really starting to like MDT, so I dove into the docs to learn as much as I could about this Windows deployment method.

As I began using the MDT, I ran into some very intelligent folks on the Web who had great implementations, ideas and scripts for Windows deployment tools. It was relatively easy to create a single task sequence that could do virtual or physical OS deployment and provide selection screens and drop-downs for options.

Depending on which platform you use, installing drivers and applications is simple as well. One aspect that really made this OS deployment tool stand out was the ability to dynamically change the deployment or application on the fly — incorporating flexible option selections and scripting — for any install I wanted.

Diving into desktop deployment
I soon completed all my Windows Server 2003 R2, 2008 and 2008 R2 OS deployments. I then ventured into the desktop OS, trying my luck with XP and Windows 7 as virtual desktops. After working out the kinks, configurations and scripting changes, the desktop deployments went off nearly without a hitch.

The only issues I encountered were with a few single-application installs, which I expected given the complexity of some of their installation routines. I also had a few driver issues when I started physical bare-metal OS deployments, which will happen no matter what method you use.

With the MDT, you have very granular control not only over the OS deployment itself, but also over the applications, drivers and update packages. And it uses the solid base of WinPE, which includes some of the OS deployment basics I mentioned above, such as unattended.xml files and home-grown scripting.
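
That driver and application control can be scripted through the same MDT provider shown earlier. The folder names, driver source path and application details below are placeholders for illustration only.

# Add out-of-box drivers, e.g. for a physical desktop model
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers" `
    -SourcePath "D:\Drivers\Latitude-E6410" -Verbose

# Add a silently installable application to the deployment share
Import-MDTApplication -Path "DS001:\Applications" -Enable "True" `
    -Name "7-Zip 9.20 x64" -ShortName "7-Zip" -Version "9.20" `
    -CommandLine "msiexec /i 7z920-x64.msi /qn" -WorkingDirectory ".\Applications\7-Zip" `
    -ApplicationSourcePath "D:\Source\7-Zip" -DestinationFolder "7-Zip" -Verbose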

Overall, I was very pleased with my progress, but there is still more to learn and accomplish. OS deployments must fit your infrastructure’s size, feature needs and management capabilities, so I hope my search has given you some ideas for your own virtual infrastructure.

Posted in TUTORIALS

Best practices for virtualizing Exchange 2010 server roles

Posted by Alin D on May 26, 2011

When virtualizing Exchange Server 2010, it’s important to correctly configure the virtual machines (VMs) that will host certain Exchange Server roles – otherwise a few gotchas will surface. For example, you’ll need to pair the correct number of processors with the appropriate amount of RAM. This tip gives more advice for correctly configuring VMs for optimal performance.

Client access and hub transport servers

Even in small Exchange environments, the Client Access server role and the Hub Transport server role are commonly collocated with the Mailbox server role on a single computer. For organizations with hundreds or even a few thousand mailboxes, this all-in-one configuration can be sufficient.

Resource requirements for the client access server (CAS) are usually lower than what the hub transport server needs. Since every piece of mail in an Exchange Server 2010 organization flows through the hub transport server — for routing and messaging policy application purposes — it bears a heavy load.

Microsoft states that each role should be given a minimum of 2 GB of RAM, with a recommended setting of 1 GB per core for the hub transport server and 2 GB per core for the client access server. By comparison, client access server requirements can be larger in environments that rely heavily on Outlook Web App (OWA).

As a rule, virtualization environments work best when VMs are configured with the smallest number of processors possible. The processor power requirements of your Exchange 2010 environment will determine that number. Starting small and working up is the best rule of thumb.

Microsoft suggests eight cores for the hub transport server and four cores for the client access server to support an organization of several mailbox servers and thousands of mailboxes. Organizations with fewer mailboxes and lower levels of non-MAPI client traffic could start with as few as two cores.
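
To make those rules of thumb concrete, here is a small, hypothetical PowerShell helper that simply encodes the per-core guidance quoted above (1 GB per core for hub transport, 2 GB per core for client access, never less than 2 GB). It is only a sketch of the arithmetic, not official sizing guidance.

# Rough RAM estimate based on the per-core figures quoted above
function Get-ExchangeRoleRamGB {
    param(
        [ValidateSet("HubTransport","ClientAccess")][string]$Role,
        [int]$Cores
    )
    $perCoreGB = if ($Role -eq "HubTransport") { 1 } else { 2 }
    [Math]::Max(2, $Cores * $perCoreGB)   # enforce the 2 GB minimum
}

# The suggested 8-core hub transport and 4-core client access VMs
Get-ExchangeRoleRamGB -Role HubTransport -Cores 8   # returns 8 (GB)
Get-ExchangeRoleRamGB -Role ClientAccess -Cores 4   # returns 8 (GB)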

Edge transport servers

Edge transport servers typically have lower resource utilization. The exact level depends on the rate of inbound and outbound mail flowing through your Exchange organization, along with the amount of mail you reject. If your organization uses spam filtering from an upstream provider, the resulting reduction in traffic will also lower edge transport resource utilization.

Microsoft also suggests configuring edge transport servers starting with a single processor core and scaling up to a maximum of 12 cores — if your virtual platform will support them. RAM requirements start at a minimum of 2 GB, with 1 GB per core. You can always add more memory later if you experience excessive paging, poor performance or an email build-up in messaging queues.
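
A quick way to watch for those symptoms is with standard performance counters and the Exchange Management Shell. A minimal sketch, assuming a hypothetical edge transport server named EDGE01, might look like this:

# Check for excessive paging on the edge transport server
Get-Counter -ComputerName "EDGE01" -Counter "\Memory\Pages/sec" `
    -SampleInterval 5 -MaxSamples 12

# Look for mail building up in the delivery queues (run from the
# Exchange Management Shell)
Get-Queue -Server "EDGE01" | Where-Object { $_.MessageCount -gt 100 } |
    Sort-Object MessageCount -Descending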

Unified messaging servers

This is easy — don’t virtualize unified messaging servers. Microsoft does not support virtualizing them. Unified messaging servers require substantial processing power and have little tolerance for processing latency. Microsoft’s suggestion for even a small Exchange organization is a minimum of two to four cores and 4 GB of RAM, at 2 GB of RAM per core. If you’re using unified messaging, I advise that you steer clear of virtualization for now.

Mailbox servers

Mailbox servers are the heavy lifters in any Exchange infrastructure; they are where resources are consumed in the largest quantities. You might want to consider virtualizing mailbox servers last. While they can be virtualized, they require careful evaluation before you begin. Experience gained from virtualizing Exchange’s other, less-challenging roles will help.

Microsoft states that a four-core mailbox server should be able to support several thousand mailboxes. The RAM recommendation for that server starts with 4 GB of RAM plus an additional 3 MB to 30 MB per mailbox. Virtualizing any mailbox server will consume a large share of your virtual host’s available RAM resources, so plan accordingly and don’t oversubscribe your virtual infrastructure.
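
As a worked example of that formula, with purely hypothetical numbers: 4,000 mailboxes at a mid-range 6 MB per mailbox comes to about 23.4 GB, plus the 4 GB base, or roughly 27.4 GB to start with. In PowerShell:

# 4 GB base plus a chosen per-mailbox allowance (3 MB to 30 MB, profile dependent)
$mailboxes = 4000; $mbPerMailbox = 6
$ramGB = 4 + [Math]::Round(($mailboxes * $mbPerMailbox) / 1024, 1)
"Recommended starting RAM: $ramGB GB"   # roughly 27.4 GB for this profile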

Exchange Server 2010 virtualization gotchas

Your Exchange organization’s mail usage characteristics have more to do with these calculations than any of the RAM or core numbers suggested here. Measuring your existing metrics on physical servers is the first step in preparing to virtualize Exchange Server. Measured over time, those metrics are necessary to determine your starting points for processor and memory resource assignment.
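
For gathering that baseline, the built-in Get-Counter cmdlet is often enough to start with. Here is a hedged sketch that samples CPU, memory and Exchange RPC latency on a hypothetical physical mailbox server named MBX01; the counter names assume Exchange 2010.

# Sample key counters every minute for an hour, then save them for trending
$counters = "\Processor(_Total)\% Processor Time",
            "\Memory\Available MBytes",
            "\MSExchangeIS\RPC Averaged Latency"

Get-Counter -ComputerName "MBX01" -Counter $counters `
    -SampleInterval 60 -MaxSamples 60 |
    Export-Counter -Path "C:\Baselines\MBX01.blg" -FileFormat BLG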

Microsoft states that relying on someone else’s numbers is a flawed approach. In Understanding Server Role Ratios and Exchange Performance, Microsoft notes, “A significant percentage of the server processing is associated with the overhead of analyzing connections and scanning accepted messages. For this reason, it’s not possible to provide a sizing metric based solely on the number of messages sent and received per second….”

While this quote relates specifically to the activities within the edge transport server, it’s also good advice for the other roles. If you don’t correctly determine a baseline for your Exchange 2010 server role performance before you begin a virtualization project, any or all of the roles may cause trouble.

Posted in Exchange

Technologies behind Microsoft Hyper-V Cloud

Posted by Alin D on January 25, 2011

Microsoft’s Hyper-V Cloud represents a collection of software, hardware, management and business process integration that evolves simple virtualization into a fully realized private cloud. But if you’re the IT professional whose job it is to construct a Hyper-V Cloud, what kinds of line items will be on your bill of materials?

At the core of Hyper-V Cloud is, unsurprisingly, Microsoft Hyper-V. As the solution’s sole hypervisor, Hyper-V is the platform upon which all your virtual infrastructure resides, driving the virtual machines (VMs) that are the workloads IT intends to manage and maintain.

Another part of this portfolio is a management studio that collects all your virtual assets together under a single pane of glass. In today’s manifestation of Hyper-V Cloud, that management studio is System Center Virtual Machine Manager 2008 R2. Yes, that’s the same Virtual Machine Manager (VMM) that you’ve seen before, and there are very few differences between the one you know and the one Hyper-V Cloud advertises.
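
For a sense of what that single pane of glass exposes to scripters, VMM 2008 R2 also ships a PowerShell snap-in. A minimal, hypothetical query of hosts and VMs (the VMM server name is a placeholder) looks roughly like this:

# Load the VMM 2008 R2 snap-in and connect to the management server
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01.example.local"

# List the managed Hyper-V hosts and the VMs they are running
Get-VMHost | Select-Object Name
Get-VM | Select-Object Name, Status, VMHost | Sort-Object VMHost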

Also key to Hyper-V Cloud is the hardware on which its virtual infrastructure sits. Hyper-V Cloud is arguably more about this hardware than any of the software that IT pros have already been using.

“But why?” you might ask. “What’s so exciting about it? Does it look different, or perform its actions in a fundamentally different way?” Not entirely, but what is different is how that hardware is pieced together, along with what happens once it is built.

Microsoft has put together a relatively sparse page that lists its Hyper-V Cloud partnerships with many hardware vendors that have been around for years. What this website doesn’t really explain is how those hardware vendors are evolving their products to better fit into the private cloud resource management model.

Private cloud moves forward with Hyper-V Cloud
There’s one specific tab, however, that gives away the real meaning behind Hyper-V Cloud: Get Pre-validated Configurations. One of the central tenets of Hyper-V Cloud is that Microsoft’s partners recognize how ineffective the old way of constructing virtual environments was, and how poorly it optimized IT spending.

An analogy here works best. Remember when building servers by hand was all the rage? Our industry still calls this process “white boxing,” as most of the cases for these do-it-yourself servers were white in color. If you took a look inside those white boxes, you might find a motherboard from one vendor, a set of RAM from another and disk drives from a third. Typically, no two white boxes were alike, because parts were added based on daily demands.

We know now that building a white box wasn’t the best use of time or energy. Fifty different servers with 50 different hardware configurations made for increasingly challenging and expensive server management. And while we’ve stopped that horrible practice with our servers, we’ve picked it back up again — out of necessity — with our virtual environments.

I say “out of necessity” because, until recently, constructing a virtual environment (or its evolved relative, the private cloud) required the white-boxing approach. There was no way to add a private cloud to the shopping cart on our hardware vendor’s website. You needed a few servers from one vendor, some storage from another and networking from a third. Often, the servers and storage were very different from each other.

As a result, many of virtualization’s easy budget wins died quick deaths due to non-optimized hardware and a lack of experience in connecting the pieces.

Hyper-V Cloud, and indeed private cloud computing in general, looks to move past white boxing through Pre-validated Configurations. These configurations comprise hardware designed with virtualization in mind. But more importantly, they’re like selecting stock-keeping units (SKUs) on a website: “Need a virtual environment? Here’s one that’ll support X number of VMs. It’ll be delivered on Friday. Need to add more resources to your virtual environment? Click here to purchase the necessary modules.”

Leaning on the expertise of hardware vendors gives IT professionals the flexibility to quickly create private cloud resource pools that are pre-configured, pre-validated and able to support a known (and with some vendors, assured) level of service. That’s good for business, because purchases are significantly easier to plan. It’s also great for IT, because what arrives will be an environment already set up with the necessary performance and capacity levels.

There’s a third part to this discussion of Hyper-V Cloud, and it’s a new way to think about the four core resources in our data center: processing, memory, storage and networking. I call it the “economics of resources.” In the final tip of this series, find out how Hyper-V Cloud, as well as private cloud computing in general, takes a cue from Economics 101 to quantify resources through supply and demand.

Posted in Windows 2008