Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘Linux’

Exchange Server virtualization improved in vSphere 5

Posted by Alin D on January 23, 2012

There are many new features in VMware vSphere 5 that supercharge performance, storage and quality-of-service of virtualized servers. Some of the improvements are particularly beneficial for virtualized Exchange servers.

Out of the 140-plus new features in vSphere 5, here are five that offer the most value to Exchange Server virtualization:

Storage Distributed Resource Scheduler

One of the most impressive features in vSphere 5 is Storage Distributed Resource Scheduler (SDRS). The traditional VMware Distributed Resource Scheduler (DRS) automatically places virtual machines (VMs) onto servers whose low CPU and RAM utilization can support the VMs’ requirements. DRS also automatically load balances VMs by dynamically moving them from one host to another if they aren’t receiving the resources they need.

The new SDRS tool performs both these very powerful functions, but for virtual storage. In other words, SDRS:

  • Places VM disk files onto shared storage that has the necessary space and storage latency;
  • Balances VM disk files across shared storage to ensure optimal storage performance; and
  • Balances VM disk files across shared storage to ensure the VM has the space it needs.

If your VM’s data store runs out of space, SDRS moves the VM disk file to another data store that contains the necessary space. Additionally, if a VM’s data store isn’t performing particularly well, SDRS moves the VM disk file to the data store that offers the best performance.

In vSphere 5, VMware also introduces the concept of the “data store cluster,” which is simply a group of data stores. You can use data store clusters with or without SDRS.

Exchange servers need proper storage I/O to perform optimally. SDRS automatically resolves storage I/O and storage capacity issues, preventing slowness and outages for your Exchange users.
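
The decision SDRS automates can be sketched in a few lines. The following Python sketch illustrates the placement logic only; it is not VMware's implementation, and the datastore names and figures are invented:

```python
# Simplified sketch of an SDRS-style placement decision: pick a datastore
# with enough free space, preferring the one with the lowest latency.
def place_vmdk(size_gb, datastores):
    """datastores: list of dicts with 'name', 'free_gb' and 'latency_ms'."""
    candidates = [d for d in datastores if d["free_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("no datastore has enough free space")
    # Among datastores with room, take the best-performing one.
    return min(candidates, key=lambda d: d["latency_ms"])["name"]

cluster = [
    {"name": "ds1", "free_gb": 500, "latency_ms": 12.0},
    {"name": "ds2", "free_gb": 80,  "latency_ms": 4.0},
    {"name": "ds3", "free_gb": 900, "latency_ms": 7.5},
]
print(place_vmdk(200, cluster))  # ds2 lacks the space, so ds3 wins on latency
```

The real SDRS balances continuously rather than only at placement time, but the trade-off it weighs (capacity versus storage latency) is the same.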

VMware vSphere 5 virtual machine file system and VM scalability enhancements

The latest version of vSphere also offers increased scalability for VMs and the virtual machine file system (VMFS). Here are some specific improvements:

  • 512 virtual machines per host;
  • Up to 160 CPU threads (logical pCPUs) per host;
  • Up to 2 TB of RAM per host;
  • Up to 32 vCPUs per VM;
  • Up to 1 TB of RAM per VM; and
  • Up to 2 TB for new data stores.

These enhancements help when your Exchange VMs grow and also when you need to create large files on the VMFS.

The vSphere 5 Storage Appliance

VMware also introduces the concept of the vSphere Storage Appliance (VSA) in vSphere 5, which is essentially a virtual network attached storage (NAS) option. It is fully supported by VMware for all advanced vSphere features like vMotion, DRS, VMware High Availability (VMHA) and even Site Recovery Manager (SRM). The downside is that you must purchase it separately.

VSA uses local storage from either two or three VMware ESXi servers, plus vCenter. The servers must be identical, fresh installs of ESXi with no VMs running on them (not even vCenter). The VSA presents the local storage as a Network File System (NFS) share.

VSA is meant for small- to medium-sized businesses that don’t already have a storage area network (SAN) or NAS and use VMware for server virtualization. The VSA is a great fit for remote office/branch office locations where it’s hard to justify the cost of a NAS.

The VSA does offer a unique benefit: if an ESXi host is lost, VMs running across the VSA keep working without downtime (Figure 1). Thus, the VSA delivers high availability and redundancy at a much lower price than a redundant hardware-based NAS or SAN.

How the vSphere 5 storage appliance works

 

So, how does VSA help virtualized Exchange infrastructures? Well, I’m not sure I’d recommend the new VSA as the single NAS/SAN in a large datacenter with hundreds of VMs – including Exchange – hitting it.

But the VSA is ideal for branch offices of a larger company that require a local Exchange infrastructure. The VSA helps you bypass dedicated NAS hardware, while still achieving high availability, making it a strong option for shared storage.

VMware vSphere replication

Before vSphere 5, you could only protect virtual infrastructures using either VMware Site Recovery Manager (SRM) with a hardware-based SAN or an application-specific recovery tool. Both options were poor value for your investment. You either had to purchase two hardware-based SANs with replication — one for each datacenter — or spend a lot to protect a single application like Exchange Server.

With vSphere 5 and SRM5, VMware announced the option for “host-based replication.” This means that an ESXi server replicates directly to another ESXi server at a backup datacenter, eliminating the need for two hardware-based SANs with replication.

Alternatively, you can replicate from a hardware-based SAN that you may have already invested in to a different SAN at a backup site. This is a huge cost savings for all types of companies because it allows disaster recovery to happen on a per-VM and per-application basis.

VMware sells the SRM5 host-based replication option for under $200 per VM with a 25-VM minimum, which works out to about $5,000. That’s a much better value than the other options.

As you can see, this has the potential to tremendously reduce the cost of protecting virtualized Exchange servers with vSphere 4.1, or even physical Exchange servers.

The vSphere 5 vCenter Server Appliance (vCSA) and vSphere Web Client

My new favorites in vSphere 5 are the vCenter Server Appliance and the new vSphere Web Client.

The vCenter Server Appliance (vCSA) is a virtual appliance you can import into your infrastructure to get vCenter up and running fast. Besides saving time on the Windows install, database install and vCenter application install, vCSA saves money because you don’t have to buy another Windows Server license.

Not only is it free with all vSphere 5 licenses, but it also enables the new vSphere Web Client by default; you don’t have to install anything. While the vSphere Web Client doesn’t do everything the vSphere Windows client does, it covers about 80% of what you will need, so it’s a nice option for typical day-to-day virtualization admin tasks.

A look at the VMware vSphere 5 Web Client

 

VMware has said that the vCSA (the Linux-based vCenter appliance) and the vSphere Web Client are its direction for the future, so we might as well start learning about these options now.

As you can see, it’s a good idea to use vSphere 5 for Exchange Server virtualization because it offers innovative features, better scalability, easy administration and the best disaster recovery options. You can learn more about vSphere 5 here.

Posted in Exchange

How to use Windows Network Load Balancing to load balance Exchange 2010

Posted by Alin D on November 13, 2011

When administrators consider load balancing their Exchange 2010 installations, they often turn to dedicated — and frequently expensive — hardware products. Fortunately, if you’re Linux-savvy, a free load-balancing option is available. If you’re not, don’t worry; help is on the way.

You can use Windows Network Load Balancing to load balance Exchange, but several limitations make it impractical for certain Exchange deployments. For example, you can’t add more than eight nodes to a Network Load Balancing cluster. You also can’t combine Windows Failover Clustering and Network Load Balancing because they can’t interact with each other.

In cases like these, you need external assistance. Help usually comes in the form of hardware-based load balancers. Unfortunately, those products aren’t cheap. Prices typically start around $1,500 for low-end models and quickly soar into the tens of thousands of dollars.

Most companies don’t have to spend that kind of money though. You can use a free virtual-software appliance that acts as a load balancer. This appliance can be installed on a repurposed server or even in a virtual machine (VM) on shared hardware. All you’re really “spending” is the time and effort to get it up and running.

Your free load-balancing options for Exchange 2010
One such appliance is HAProxy, a Linux-based Layer 4 load balancer for TCP and HTTP applications. A number of third-party products, such as redWall’s Firewall and Exceliance’s HAPEE distribution, already use the tool, as do many satisfied users: the Fedora Project, Reddit, StackOverflow and many more.

You must be comfortable with Linux to use HAProxy in your Exchange 2010 production environment. If not, Microsoft-certified systems administrator Steve Goodman created the Exchange 2010 HAProxy Virtual Load Balancer.

The appliance is a pre-packaged version of HAProxy, built on Ubuntu Linux, that can be deployed on VMware vSphere or Microsoft Hyper-V with minimal work required by an Exchange administrator.

All you need is a solid understanding of your network topology and some familiarity with either VMware or Hyper-V. While you don’t need to fully understand Linux to install Goodman’s appliance, it does help to know about the OS if you want to fine-tune aspects of the tool that aren’t available through the Web interface. That said, you can get the HAProxy Virtual Load Balancer up and running in your Exchange 2010 lab environment without being a Linux expert.

The appliance comes in two formats: a VMware vSphere .ovf file and a Hyper-V-compatible .vhd file. The tool’s website contains step-by-step instructions on how to set up HAProxy on either vSphere or Hyper-V.

Setting up the Exchange 2010 HAProxy Virtual Load Balancer
Boot the appliance and you’re greeted with a simple console login screen. To begin, type in root as your username and setup as your password. You will then be prompted to choose a new password, which secures the setup process; you can change the password again later.

Next comes the most important part of the setup: setting the IP address, netmask and default gateway for HAProxy. If you mistype anything, press Ctrl+C to exit the script, type logout, then log back in with your new password and repeat the process. After you complete this step, you will be given a URL; make sure to write it down. You will be prompted to log back in when HAProxy reboots.

The rest of the setup process — as well as most HAProxy management — is done through HAProxy’s Web interface. Configure the static RPC ports for your client access servers, then list the IP addresses of each of the client access servers you want to balance. You must also set the time zone and the network time protocol (NTP) servers. Don’t touch the console login screen unless there’s an overwhelming reason to do so.
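
Under the hood, the configuration the appliance manages boils down to TCP-mode frontends and backends. A hedged sketch of what such a configuration looks like — the names, addresses and timeouts below are placeholders, not the appliance’s actual output:

```
# Hypothetical HAProxy sketch: TCP-mode balancing for two client access servers.
defaults
    mode    tcp
    timeout connect 5s
    timeout client  10m
    timeout server  10m

# HTTPS traffic (Outlook Web App, etc.) arriving at the virtual IP
frontend exchange_https
    bind 10.0.0.10:443
    default_backend cas_https

backend cas_https
    balance roundrobin
    server cas1 10.0.0.11:443 check
    server cas2 10.0.0.12:443 check
```

Similar frontend/backend pairs would be needed for the static RPC ports you configure for the client access servers.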

While the HAProxy Virtual Load Balancer has been through plenty of development, the virtual appliance is still a work in progress. HAProxy is a Layer 4 (TCP) balancer, not a Layer 7 (application-level) balancer, so it is not completely “Exchange-aware” and can’t do things like application-level monitoring or SSL offloading, at least not yet.
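
The Layer 4 behaviour described above can be illustrated with a minimal Python sketch of per-connection round-robin selection; the backend addresses are hypothetical, and a real Layer 7 balancer would additionally inspect the application payload before choosing:

```python
# Layer 4 balancing picks a backend per TCP connection without looking at
# the HTTP payload; round-robin is the simplest such policy.
from itertools import cycle

class RoundRobinBalancer:
    """Cycles through backends, one pick per incoming connection."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def pick(self):
        return next(self._pool)

cas_servers = ["10.0.0.11:443", "10.0.0.12:443"]  # hypothetical CAS nodes
lb = RoundRobinBalancer(cas_servers)
print([lb.pick() for _ in range(4)])
# An "Exchange-aware" Layer 7 balancer could also parse the request (for
# example, the URL) before choosing, which Layer 4 balancing cannot do.
```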

These items may eventually be added, and it sounds like Goodman plans to further improve the tool. “Subsequent versions will be production ready, as this is totally aimed at being an easy-to-use free alternative to paid-for hardware and virtual load balancers for Exchange 2010,” Goodman said.

 

Posted in TUTORIALS

Why use ASLR to increase Windows security

Posted by Alin D on June 21, 2011

Imagine this: a security control built right into Windows Server (enabled by default) that helps stop malware in its tracks as soon as the OS starts booting. Well, if you’re running Windows Server 2008 or R2, you’ve already got such protection. In fact, your enterprise clients running Windows Vista and Windows 7 have it as well. It’s called address space layout randomization (ASLR).

ASLR helps prevent buffer overflow attacks by randomizing the locations where system executables are loaded into memory. If a DLL has its dynamic-relocation flag set, its location in memory is automatically randomized. Malware looking for certain files at specific memory locations is thus tripped up and cannot run its exploit. In fact, ASLR can cause malware to draw attention to itself by crashing the very system file(s) it’s attacking.
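
The effect of that randomization is easy to observe. The following Python sketch is illustrative only and assumes a Linux or macOS host, where `ctypes.CDLL(None)` exposes libc; Windows behaves analogously for relocatable DLLs. It launches two fresh processes and prints the address of the same libc symbol in each:

```python
# Observe ASLR: the same shared-library symbol usually lands at a different
# address in each new process when randomization is enabled.
import ctypes
import subprocess
import sys

SNIPPET = (
    "import ctypes;"
    "libc = ctypes.CDLL(None);"  # handle to the process's C library
    "print(ctypes.cast(libc.printf, ctypes.c_void_p).value)"
)

def libc_symbol_address() -> int:
    """Address of printf as seen by a freshly started process."""
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return int(out.stdout)

a, b = libc_symbol_address(), libc_symbol_address()
print(hex(a), hex(b))  # with ASLR enabled, these typically differ
```

If the two addresses match on your system, address space randomization is likely disabled or restricted there.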

Another neat thing about ASLR is that it plays nicely with Dynamic Memory, the new Windows Server 2008 R2 SP1 feature that dynamically allocates system memory to Hyper-V virtual machines as needed. Furthermore, ASLR has a negligible impact on client performance.

Of course, ASLR really isn’t anything new. Third-party endpoint protection vendors have offered ASLR for years. Ditto with Linux. But while Microsoft was a bit late to the game on this one, the only thing that matters now is that the company is building preventative controls against malware directly into the Windows OS — arguably where it should’ve been all along.

So, why worry about a security control like ASLR on Windows Server-based systems? I still see a lot of servers that aren’t running anti-malware software in the name of performance or because they “don’t use this server for anything but file sharing and Active Directory management anyway.” The problem is that these servers are wide open for attack. The bad guys and their code don’t discriminate, making the servers fair game for numerous malware and vulnerability exploits.

Not everything is rosy with ASLR though, so you can’t forget the law of unintended consequences and let your guard down. Here are some things you can’t afford to overlook:

  • ASLR works only with DLL files that have been written to support it, which means you’ve got to trust that your developers and vendors have enabled it for their code.
  • It could lead to a false sense of security on Windows-based systems and thus a lack of maintenance and oversight for traditional malware protection, patch management and poorly-written code.
  • There is the potential that ASLR could create certain system instabilities and performance issues by fragmenting memory over time.
  • It’s not necessarily a prevention technology as much as it is an evasion technology. Presumably, malware could eventually detect or crack the location of the system files it’s trying to hook into, as outlined in this informative paper on PaX security. All things considered, however, this delves into the area of diminishing security returns and residual risk that everyone has to deal with in some fashion, so I’m not convinced you should let these issues keep you up at night.

One more thing to note is that to gain the full benefits of ASLR, it needs to be used in conjunction with Data Execution Prevention (DEP), a built-in memory protection feature designed to help protect applications from exploits. Fortunately DEP is also enabled by default in Windows Server 2003 SP1 and up.

All in all, ASLR is a step in the right direction in the fight against malware in the enterprise, but only time will tell just how effective it truly is.

 

Posted in Windows 2008

Best practices for good Microsoft IIS 7 security

Posted by Alin D on June 21, 2011

Microsoft’s Internet Information Services (IIS) Web server has presented enterprises with more than its share of security problems over the years, including the infamous Code Red worm nearly a decade ago. A key security concern with IIS has always been the number of features that are automatically installed and enabled by default, such as scripting and virtual directories, many of which proved vulnerable to exploit and led to major security incidents.

With the release of IIS 6 a few years ago, a “lockdown by default” approach was introduced with several features either not being installed or installed but disabled by default. IIS 7, the newest iteration, goes even further. It’s not even installed on Windows Server 2008 by default, and when it is installed, the Web server is configured to serve only static content with anonymous authentication and local administration, resulting in the simplest of Web servers and the smallest attack surface possible to would-be hackers.

This is possible because IIS 7 is completely modularized. Let’s briefly dig into why that is and how it enables a more secure product. Essentially, administrators can select from more than 40 separate feature modules to completely customize their installation. By installing only the feature modules required for a particular website, administrators can greatly reduce the potential attack surface and minimize resource utilization.

Be aware, however, that this is true only with a clean install. If you are upgrading your Windows OS and running an earlier version of IIS, all the metabase and IIS state information is gathered and preserved. Consequently, many unnecessary Web server features can be installed during an upgrade. It is therefore good practice to revisit your application dependencies on IIS functionality after an upgrade and uninstall any unneeded IIS modules.

Fewer components also mean fewer settings to manage and fewer problems to patch, since it’s only necessary to maintain the subset of modules actually in use. This reduces downtime and improves reliability. Also, the IIS Management Console, with all its confusing tabs, has been replaced with a far more intuitive GUI tool, which makes it easier to visualize and understand how security settings are implemented. For example, if the component supporting basic authentication is not installed on your system, its configuration setting doesn’t appear and confuse matters.

So what components are likely to be needed to run a secure IIS? The first six listed below will be required by any website running more than just static pages; items seven and eight are necessary for anyone who needs to encrypt data between server and client; and shared configuration (item nine) is useful when you have a Web farm and want each Web server in the farm to use the same configuration files and encryption keys:

  1. Authentication includes integrated Windows authentication, client certificate authentication and ASP.NET forms-based authentication, which lets you manage client registration and authentication at the application level, instead of relying on Windows accounts. 
  2. URL Authorization, which integrates nicely with ASP.NET Membership and Role Management, grants or denies access to URLs within your application based on user names and roles so you can prevent users who are not members of a specific group from accessing restricted content. 
  3. IPv4 Address and Domain Name Rules provide content access based on IP Address and Domain Name. The new property “allowUnlisted” makes it a lot easier to prevent access to all IP addresses unless they are listed. 
  4. CGI and ISAPI restrictions allow you to enable and disable dynamic content in the form of CGI files (.exe) and ISAPI extensions (.dll). 
  5. Request filters incorporate the functionality of the UrlScan tool, restricting the types of HTTP requests that IIS 7 will process by rejecting requests containing suspicious data. Like Apache’s mod_rewrite, they can use regular expressions to block attacks or modify requests based on verb, file extension, size, namespace and sequences. 
  6. Logging now provides real-time state information about application pools, processes, sites, application domains and running requests as well as the ability to track a request throughout the complete request-and-response process. 
  7. Server Certificates 
  8. Secure Sockets Layer 
  9. Shared Configuration
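
As an illustration of item 5, request filtering is configured under `<system.webServer>` in web.config. The limits and extensions below are invented examples, not recommendations:

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- reject oversized requests and very long URLs -->
        <requestLimits maxAllowedContentLength="30000000" maxUrl="4096" />
        <!-- block requests for executable content -->
        <fileExtensions>
          <add fileExtension=".exe" allowed="false" />
        </fileExtensions>
        <!-- deny HTTP verbs the application never uses -->
        <verbs>
          <add verb="TRACE" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```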

Other features that enhance the overall security of IIS 7 include new built-in user and group accounts dedicated to the Web server, which enable a common security identifier (SID) to be used across machines and so simplify access control list management, as well as application pool sandboxing. Server administrators, meanwhile, have complete control over which settings application administrators may configure, while still letting them make configuration changes directly in their applications without administrative access to the server.

IIS 7 is quite a different beast compared with previous incarnations, and that’s a good thing. It has been designed and built along classic security principles, and it gives Windows-based organizations a Web server that can be more securely configured and managed than ever before. There may still not be enough from a security perspective to sway Linux and Apache shops to switch to IIS anytime soon, but Microsoft has definitely narrowed the security gap between them. It will take administrators a while to get used to the new modular format and administrative tools and tasks, but the training and testing time will be worth it, as it is an OS and framework administrators are already familiar with.

 

 

 

Posted in TUTORIALS

Windows 2008 VPS – Must Read Before You Book the Server

Posted by Alin D on March 8, 2011


A Windows VPS, or Windows virtual private server, splits a physical server into multiple virtual servers. Each virtual server runs its own full-fledged operating system and can be rebooted independently.

A Windows VPS is a boon for small to mid-sized businesses that cannot afford to run an expensive dedicated server but have outgrown shared Web hosting. The VPS option is a structured solution that brings the best of both worlds.

First, depending on the package you subscribe to, the provider can manage your backups entirely on your behalf. The server is backed up daily so that it can be restored at any point; all you have to do is submit a request to the technical department.

Second, a Windows virtual private server, particularly a Windows 2008 VPS, offers regular system updates. Updates are performed and managed for you as soon as new ones are released.

Third, you can easily monitor your system’s capacity and learn how much resource usage you need at any given point in time. The hosting provider precisely tracks usage and sends recommendations on the hosting package that best suits your needs.

Fourth, the companies that supply the server support also have excellent customer support. You can simply give them a call and they will be at your service, so you can be assured that troubleshooting will not be delayed.

Fifth, you can also do availability monitoring. You would not otherwise know about potential problems your server may be facing, which is a real irritant: when a problem crops up it can catch you napping, leaving you scrambling to fix it. With monitoring, you are alerted to an impending problem far in advance, so you can stay prepared and take precautionary steps to find remedies.

A VPS is thus a great mixture of efficiency and affordability: an ideal solution for small and medium-sized businesses that have financial constraints but still insist on good performance. The customer support is good, too.

Article from articlesbase.com

Posted in Windows 2008


Windows 7 and Windows Server 2008 R2 certification awarded to Perle Systems Serial and Parallel Cards

Posted by Alin D on March 3, 2011


NASHVILLE, TN—October 1, 2009— Perle Systems, the global developer and manufacturer of serial connectivity solutions today announced Windows 7 and Windows Server 2008 R2 certification for their full range of SPEED and UltraPort Serial Cards and SPEED Parallel cards. Perle is the first major serial connectivity company to have a digitally signed Microsoft driver for both 32-bit and 64-bit versions of Windows 7 and Windows Server 2008 R2.  All drivers can be downloaded from Perle’s website.

“Perle Systems continues to lead the industry when it comes to support for the widest range of operating systems,” comments Julie McDaniel, Vice President Marketing, Perle Systems. She continues, “Our users can be confident that our full line of serial and parallel cards will continue to operate on Microsoft’s latest operating systems. This certification and early adoption of a new standard demonstrates Perle’s commitment to customers for long-term investment protection and support.”

Microsoft grants Windows 7 and Windows Server 2008 R2 certification after a product passes a series of rigorous tests. Once a product is certified, the company earns the right to use the highly respected Microsoft Windows 7 and Windows Server 2008 R2 logo. Authorized use of this logo is proof that a product or solution has met the stringent criteria set out by Microsoft, indicating reliability and technical excellence.

Perle’s Serial and Parallel Card lines enable you to easily add RS232, RS422, RS485 serial or parallel ports to your PC or server. Compatible with PCI, PCI-X or PCI Express bus slots, Perle cards are the only products that support all major operating systems including Windows, Vista, Linux, Solaris, SPARC as well as SCO.

About Perle Systems – http://www.perle.com:
Perle Systems is a leading developer, manufacturer, and vendor of high-reliability, richly featured serial-to-Ethernet networking products. These products are used to connect remote users reliably and securely to central servers for a wide variety of business applications. Product lines include Console Servers for Data Center Management, Terminal Servers, Device Servers, Ethernet I/O, and Serial Cards. Perle distinguishes itself through extensive networking technology, depth of experience in major real-world network environments, and long-term distribution and VAR channel relationships in major world markets. Perle has offices and representative offices in 11 countries in North America, Europe, and Asia and sells its products through distribution and OEM/ODM channels worldwide.

Article from articlesbase.com

Posted in Windows 2008 | Leave a Comment »

PowerShell commands in Windows Server 2008 R2

Posted by Alin D on January 25, 2011

Shells are a necessity when using operating systems. They give users the ability to execute arbitrary commands and to traverse the file system. Anybody who has used a computer has dealt with a shell, whether by typing commands at a prompt or by clicking an icon to start a word processing application. Every user works through a shell in some fashion; it’s inescapable when working on a computer system.

Until now, Windows users and administrators primarily have used Windows Explorer or the cmd command prompt (both shells) to interact with most versions of the Windows operating system. With Microsoft’s release of PowerShell, which is both a new shell and a scripting language, the standard for interacting with and managing Windows is rapidly changing. This change became evident with the release of Microsoft Exchange Server 2007, which used PowerShell as its management backbone; the addition of PowerShell as a feature within Windows Server 2008; and now the inclusion of PowerShell as part of the Windows 7 and Windows Server 2008 R2 operating systems.

In this chapter, we take a closer look at what shells are and how they have developed. Next, we review Microsoft’s past attempt at providing an automation interface (WSH) and then introduce PowerShell. From there, we step into understanding PowerShell’s features and how to use them to manage Windows Server 2008 R2. Finally, we review some best practices for using PowerShell.

Understanding Shells

A shell is an interface that enables users to interact with the operating system. A shell isn’t considered an application because of its inescapable nature, but it’s the same as any other process running on a system. The difference between a shell and an application is that a shell’s purpose is to enable users to run other applications. In some operating systems (such as UNIX, Linux, and VMS), the shell is a command-line interface (CLI); in other operating systems (such as Windows and Mac OS X), the shell is a graphical user interface (GUI).

Both CLI and GUI shells have benefits and drawbacks. For example, most CLI shells allow powerful command chaining (using commands that feed their output into other commands for further processing; this is commonly referred to as the pipeline). GUI shells, however, require commands to be completely self-contained. Furthermore, most GUI shells are easy to navigate, whereas CLI shells require a preexisting knowledge of the system to avoid attempting several commands to discern the location and direction to head in when completing an automation task. Therefore, choosing which shell to use depends on your comfort level and what’s best suited to perform the task at hand.
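For example, PowerShell’s pipeline (discussed in more detail later in this chapter) chains commands in exactly this way:

```powershell
# Each command feeds its output into the next: Get-Process emits
# process objects, Sort-Object orders them by working set, and
# Select-Object keeps only the top five.
Get-Process | Sort-Object WS -Descending | Select-Object -First 5
```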

Note:
Even though GUI shells exist, the term “shell” is used almost exclusively to describe a command-line environment, not a task that is performed with a GUI application, such as Windows Explorer. Likewise, shell scripting refers to collecting commands normally entered on the command line or into an executable file.

A Short History of Shells

The first shell in wide use was the Bourne shell, the standard user interface for the UNIX operating system; UNIX systems still require it for booting. This robust shell provided pipelines and conditional and recursive command execution. It was developed by C programmers for C programmers.

Oddly, however, despite being written by and for C programmers, the Bourne shell didn’t have a C-like coding style. This lack of similarity to the C language drove the invention of the C shell, which introduced more C-like programming structures. While the C shell inventors were building a better mousetrap, they decided to add command-line editing and command aliasing (defining command shortcuts), which eased the bane of every UNIX user’s existence: typing. The less a UNIX user has to type to get results, the better.

Although most UNIX users liked the C shell, learning a completely new shell was a challenge for some. So, the Korn shell was invented, which added a number of the C shell features to the Bourne shell. Because the Korn shell is a commercially licensed product, the open source software movement needed a shell for Linux and FreeBSD. The collaborative result was the Bourne Again shell, or Bash, invented by the Free Software Foundation.

Throughout the evolution of UNIX and the birth of Linux and FreeBSD, other operating systems were introduced along with their own shells. Digital Equipment Corporation (DEC) introduced Virtual Memory System (VMS) to compete with UNIX on its VAX systems. VMS had a shell called Digital Command Language (DCL) with a verbose syntax, unlike that of its UNIX counterparts. Also, unlike its UNIX counterparts, it wasn’t case sensitive, nor did it provide pipelines.

Somewhere along the way, the PC was born. IBM took the PC to the business market, and Apple rebranded roughly the same hardware technology and focused on consumers. Microsoft made DOS run on the IBM PC, acting as both kernel and shell and including some features of other shells. (The pipeline syntax was inspired by UNIX shells.)

Following DOS was Windows, which went from application to operating system quickly. Windows introduced a GUI shell, which has become the basis for Microsoft shells ever since. Unfortunately, GUI shells are notoriously difficult to script, so Windows provided a DOSShell-like environment. It was improved with a new executable, cmd.exe instead of command.com, and a more robust set of command-line editing features. Regrettably, this change also meant that shell scripts in Windows had to be written in the DOSShell syntax for collecting and executing command groupings.

Over time, Microsoft realized its folly and decided systems administrators should have better ways to manage Windows systems. Windows Script Host (WSH) was introduced in Windows 98, providing a native scripting solution with access to the underpinnings of Windows. It was a library that allowed scripting languages to use Windows in a powerful and efficient manner. WSH is not its own language, however, so a WSH-compliant scripting language was required to take advantage of it, such as JScript, VBScript, Perl, Python, Kixstart, or Object REXX. Some of these languages are quite powerful in performing complex processing, so WSH seemed like a blessing to Windows systems administrators.

However, the rejoicing was short-lived because there was no guarantee that the WSH-compliant scripting language you chose would be readily available or a viable option for everyone. The lack of a standard language and environment for writing scripts made it difficult for users and administrators to incorporate automation by using WSH. The only way to be sure the scripting language or WSH version would be compatible on the system being managed was to use a native scripting language, which meant using DOSShell and enduring the problems that accompanied it. In addition, WSH opened a large attack vector for malicious code to run on Windows systems. This vulnerability gave rise to a stream of viruses, worms, and other malicious programs that have wreaked havoc on computer systems, thanks to WSH’s focus on automation without user intervention.

The end result was that systems administrators viewed WSH as both a blessing and a curse. Although WSH presented a good object model and access to a number of automation interfaces, it wasn’t a shell. It required using Wscript.exe and Cscript.exe, scripts had to be written in a compatible scripting language, and its attack vulnerabilities posed a security challenge. Clearly, a different approach was needed for systems management; over time, Microsoft reached the same conclusion.

Introduction to PowerShell

The introduction of WSH as a standard in the Windows operating system offered a robust alternative to DOSShell scripting. Unfortunately, WSH presented a number of challenges, discussed in the preceding section. Furthermore, WSH didn’t offer the CLI shell experience that UNIX and Linux administrators had enjoyed for years, resulting in Windows administrators being made fun of by the other chaps for the lack of a CLI shell and its benefits.

Luckily, Jeffrey Snover (the architect of PowerShell) and others on the PowerShell team realized that Windows needed a strong, secure, and robust CLI shell for systems management. Enter PowerShell. PowerShell was designed as a shell with full access to the underpinnings of Windows via the .NET Framework, Component Object Model (COM) objects, and other methods. It also provided an execution environment that’s familiar, easy, and secure. PowerShell is aptly named, as it puts the power into the Windows shell. For users wanting to automate their Windows systems, the introduction of PowerShell was exciting because it combined “the power of WSH with the warm-fuzzy familiarity of a CLI shell.”

PowerShell provides a powerful native scripting language, so scripts can be ported to all Windows systems without worrying about whether a particular language interpreter is installed. In the past, an administrator might have gone through the rigmarole of scripting a solution with WSH in Perl, Python, VBScript, JScript, or another language, only to find that the next system that they worked on didn’t have that interpreter installed. At home, users can put whatever they want on their systems and maintain them however they see fit, but in a workplace, that option isn’t always viable. PowerShell solves that problem by removing the need for nonnative interpreters. It also solves the problem of wading through websites to find command-line equivalents for simple GUI shell operations and coding them into .cmd files. Last, PowerShell addresses the WSH security problem by providing a platform for secure Windows scripting. It focuses on security features such as script signing, lack of executable extensions, and execution policies (which are restricted by default).

For anyone who needs to automate administration tasks on a Windows system or a Microsoft platform, PowerShell provides a much-needed injection of power. As such, for Windows systems administrators or scripters, becoming a PowerShell expert is highly recommended. After all, PowerShell can now be used to efficiently automate management tasks for Windows, Active Directory, Terminal Services, SQL Server, Exchange Server, Internet Information Services (IIS), and even a number of different third-party products.

As such, PowerShell is the approach Microsoft had been seeking as the automation and management interface for their products. Thus, PowerShell is now the endorsed solution for the management of Windows-based systems and server products. Over time, PowerShell could even possibly replace the current management interfaces, such as cmd.exe, WSH, CLI tools, and so on, while becoming even further integrated into the Windows operating system. The trend toward this direction can be seen with the release of Windows Server 2008 R2 and Windows 7, in which PowerShell is part of the operating system.

PowerShell Uses

In Windows, an administrator can complete a number of tasks using PowerShell. The following list is a sampling of these tasks:

  • Manage the file system — To create, delete, modify, and set permissions for files and folders.
  • Manage services — To list, stop, start, restart, and even modify services.
  • Manage processes — To list (monitor), stop, and start processes.
  • Manage the Registry — To create, delete, and modify registry keys and values.
  • Use Windows Management Instrumentation (WMI) — To manage not only Windows, but also other platforms such as IIS and Terminal Services.
  • Use existing Component Object Model (COM) objects — To complete a wide range of automation tasks.
  • Manage a number of Windows roles and features — To add or remove roles and features.
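A brief sketch of a few of the tasks above (the service and process names are only examples; substitute your own):

```powershell
# File system: create a folder
New-Item -Path C:\Temp\Logs -ItemType Directory

# Services: list stopped services, then restart the print spooler
Get-Service | Where-Object { $_.Status -eq 'Stopped' }
Restart-Service -Name Spooler

# Processes: preview stopping Notepad without actually doing it
Stop-Process -Name notepad -WhatIf
```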

PowerShell Features

PowerShell is a departure from the current management interfaces in Windows. As such, it has been built from the ground up to include a number of features that make CLI- and script-based administration easier. Some of PowerShell’s key features are as follows:

  • It has 240 built-in command-line tools (referred to as cmdlets).
  • The scripting language is designed to be readable and easy to use.
  • PowerShell supports existing scripts, command-line tools, and automation interfaces, such as WMI, ADSI, .NET Framework, ActiveX Data Objects (ADO), and so on.
  • It follows a strict naming convention for commands based on a verb-noun format.
  • It supports a number of different Windows operating systems: Windows XP SP2 or later, Windows Server 2003 SP1 or later, Windows Vista, Windows Server 2008, and now Windows Server 2008 R2 and Windows 7.
  • It provides direct “access to and navigation of” the Windows Registry, certificate store, and file system using a common set of commands.
  • PowerShell is object based, which allows data (objects) to be piped between commands.
  • It is extensible, which allows third parties (as noted earlier) to build upon and extend PowerShell’s already rich interfaces for managing Windows and other Microsoft platforms.

PowerShell 2.0 Enhancements

Windows Server 2008 R2 has the Windows PowerShell 2.0 version built in to the operating system. In this version of PowerShell, a number of enhancements have been made to both PowerShell itself and the ability for managing Windows Server 2008 R2’s roles and features. The following is a summary for some of the improvements in PowerShell 2.0 (these features are talked about in greater detail later in this chapter and throughout this book):

  • The number of built-in cmdlets has nearly doubled from 130 to 240.
  • PowerShell 2.0 now includes the ability to manage a number of roles and features such as Active Directory Domain Services, Active Directory Rights Management Services, AppLocker, Background Intelligent Transfer Service (BITS), Best Practices Analyzer, Failover Clustering (WSFC), Group Policy, Internet Information Services (IIS), Network Load Balancing (NLB), Remote Desktop Services (RDS), Server Manager, Server Migration, and Windows Diagnostics.
  • PowerShell 2.0 also includes the introduction of the Windows PowerShell debugger. Using this feature, an administrator can identify errors or inefficiencies in scripts, functions, commands, and expressions while they are being executed through a set of debugging cmdlets or the Integrated Scripting Environment (ISE).
  • The PowerShell Integrated Scripting Environment (ISE) is a multi-tabbed GUI-based PowerShell development interface. Using the ISE, an administrator can write, test, and debug scripts. The ISE includes such features as multiline editing, tab completion, syntax coloring, selective execution, context-sensitive help, and support for right-to-left languages.
  • Background jobs enable administrators to execute commands and scripts asynchronously.
  • Also through the inclusion of script functions, administrators can now create their own cmdlets without having to write and compile the cmdlet using a managed-code language like C#.
  • PowerShell 2.0 also includes a new powerful feature, called modules, which allows packages of cmdlets, providers, functions, variables, and aliases to be bundled and then easily shared with others.
  • The lack of remote command support has also been addressed in PowerShell 2.0 with the introduction of remoting. This feature enables an administrator to automate the management of many remote systems through a single PowerShell console.
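Two of these enhancements, background jobs and remoting, can be sketched as follows (Server1 and Server2 are placeholder names, and the targets must first have remoting enabled with Enable-PSRemoting):

```powershell
# Background job: run a slow command asynchronously, then collect its output
$job = Start-Job -ScriptBlock { Get-EventLog -LogName System -Newest 100 }
Wait-Job $job
Receive-Job $job

# Remoting: run one scriptblock on several machines from a single console
Invoke-Command -ComputerName Server1, Server2 -ScriptBlock { Get-Service WinRM }
```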

However, with all of these features, the most important advancement that is found in PowerShell 2.0 is the focus on what is called the Universal Code Execution model. The core concept in this model is flexibility over how expressions, commands, and scriptblocks are executed across one or more machines.

Understanding the PowerShell Basics

To begin working with PowerShell, some of the basics like accessing PowerShell, working from the command-line interface, and understanding the basic commands are covered in this section of the book.

Accessing PowerShell

After logging in to your Windows interactive session, there are several methods to access and use PowerShell. The first method is from the Start menu, as shown in the following steps:

  1. Click Start, All Programs, Accessories, Windows PowerShell.
  2. Choose either Windows PowerShell (x86) or Windows PowerShell.

To use the second method, follow these steps:

  1. Click Start.
  2. Type PowerShell in the Search Programs and Files text box and press Enter.

Both these methods open the PowerShell console, whereas the third method launches PowerShell from a cmd command prompt:

  1. Click Start, Run.
  2. Type cmd and click OK to open a cmd command prompt.
  3. At the command prompt, type powershell and press Enter.

Command-Line Interface (CLI)

The syntax for using PowerShell from the CLI is similar to the syntax for other CLI shells. The fundamental component of a PowerShell command is, of course, the name of the command to be executed. In addition, the command can be made more specific by using parameters and arguments for parameters. Therefore, a PowerShell command can have the following formats:

  • [command name]
  • [command name] -[parameter]
  • [command name] -[parameter] -[parameter] [argument1]
  • [command name] -[parameter] -[parameter] [argument1],[argument2]

When using PowerShell, a parameter is a variable that can be accepted by a command, script, or function. An argument is a value assigned to a parameter. Although these terms are often used interchangeably, remembering these definitions is helpful when discussing their use in PowerShell.
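To make the distinction concrete, in the following command -Name is the parameter and lsass is the argument assigned to it:

```powershell
Get-Process -Name lsass
```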

Navigating the CLI

As with all CLI-based shells, you need to understand how to effectively navigate and use the PowerShell CLI. Table 21.1 lists the editing operations associated with various keys when using the PowerShell console.

TABLE 21.1 PowerShell Console Editing Features

Keys Editing Operation
Left and right arrows Move the cursor left and right through the current command line.
Up and down arrows Move up and down through the list of recently typed commands.
PgUp Displays the first command in the command history.
PgDn Displays the last command in the command history.
Home Moves the cursor to the beginning of the command line.
End Moves the cursor to the end of the command line.
Insert Switches between insert and overstrike text-entry modes.
Delete Deletes the character at the current cursor position.
Backspace Deletes the character immediately preceding the current cursor position.
F3 Displays the previous command.
F4 Deletes characters from the current cursor position up to, but not including, a specified character.
F5 Moves backward through the command history.
F7 Displays a list of recently typed commands in a pop-up window in the command shell. Use the up and down arrows to select a previously typed command, and then press Enter to execute the selected command.
F8 Moves backward through the command history with commands that match the text that has been entered at the command prompt.
F9 Prompts for a command number and executes the specified command from the command history (command numbers refer to the F7 command list).
Tab Auto-completes command-line sequences. Use the Shift+Tab sequence to move backward through a list of potential matches.

Luckily, most of the features in Table 21.1 are native to the cmd command prompt, which makes PowerShell adoption easier for administrators already familiar with the Windows command line. The only major difference is that the Tab key auto-completion is enhanced in PowerShell beyond what’s available with the cmd command prompt.

As with the cmd command prompt, PowerShell performs auto-completion for file and directory names. So, if you enter a partial file or directory name and press Tab, PowerShell returns the first matching file or directory name in the current directory. Pressing Tab again returns a second possible match and enables you to cycle through the list of results. Like the cmd command prompt, PowerShell’s Tab key auto-completion can also autocomplete with wildcards. The difference between Tab key auto-completion in cmd and PowerShell is that PowerShell can auto-complete commands. For example, you can enter a partial command name and press the Tab key, and PowerShell steps through a list of possible command matches.

PowerShell can also auto-complete parameter names associated with a particular command. Simply enter a command and partial parameter name and press the Tab key, and PowerShell cycles through the parameters for the command that has been specified. This method also works for variables associated with a command. In addition, PowerShell performs auto-completion for methods and properties of variables and objects.

Command Types

When a command is executed in PowerShell, the command interpreter looks at the command name to figure out what task to perform. This process includes determining the type of command and how to process that command. There are four types of PowerShell commands: cmdlets, shell function commands, script commands, and native commands.

cmdlet

The first command type is a cmdlet (pronounced “command-let”), which is similar to the built-in commands in other CLI-based shells. The difference is that cmdlets are implemented by using .NET classes compiled into a dynamic link library (DLL) and loaded into PowerShell at runtime. This difference means there’s no fixed class of built-in cmdlets; anyone can use the PowerShell Software Development Kit (SDK) to write a custom cmdlet, thus extending PowerShell’s functionality.

A cmdlet is always named as a verb and noun pair separated by a “-” (hyphen). The verb specifies the action the cmdlet performs, and the noun specifies the object being operated on. An example of a cmdlet being executed is shown as follows:

PS C:\> Get-Process

Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName
425 5 1608 1736 90 3.09 428 csrss
79 4 1292 540 86 1.00 468 csrss
193 4 2540 6528 94 2.16 2316 csrss
66 3 1128 3736 34 0.06 3192 dwm
412 11 13636 20832 125 3.52 1408 explorer

...

While executing cmdlets in PowerShell, you should take a couple of considerations into account. Overall, PowerShell was created such that it is both forgiving and easy when it comes to syntax. In addition, PowerShell also always attempts to fill in the blanks for a user. Examples of this are illustrated in the following items:

  • Cmdlets are always structured in a nonplural verb-noun format.
  • Parameters and arguments are positional: Get-Process winword.
  • Many arguments can use wildcards: Get-Process w*.
  • Partial parameter names are also allowed: Get-Process -p w*.

Note:
When executed, a cmdlet only processes a single record at a time.

Functions

The next type of command is a function. These commands provide a way to assign a name to a list of commands. Functions are similar to subroutines and procedures in other programming languages. The main difference between a script and a function is that a new instance of the shell is started for each shell script, and functions run in the current instance of the same shell.

Note:
Functions defined at the command line remain in effect only during the current PowerShell session. They are also local in scope and don't apply to new PowerShell sessions.

Although a function defined at the command line is a useful way to create a series of commands dynamically in the PowerShell environment, these functions reside only in memory and are erased when PowerShell is closed and restarted. Therefore, although creating complex functions dynamically is possible, writing these functions as script commands might be more practical. An example of a shell function command is as follows:

PS C:\> function showFiles {Get-ChildItem}
PS C:\> showFiles

Directory: Microsoft.PowerShell.Core\FileSystem::C:\

Mode LastWriteTime Length Name
d---- 9/4/2007 10:36 PM inetpub
d---- 4/17/2007 11:02 PM PerfLogs
d-r-- 9/5/2007 12:19 AM Program Files
d-r-- 9/5/2007 11:01 PM Users
d---- 9/14/2007 11:42 PM Windows
-a--- 3/26/2007 8:43 PM 24 autoexec.bat
-ar-s 8/13/2007 11:57 PM 8192 BOOTSECT.BAK
-a--- 3/26/2007 8:43 PM 10 config.sys

Advanced Functions

Advanced functions are a new feature that was introduced in PowerShell v2.0. The basic premise behind advanced functions is to enable administrators and developers access to the same type of functionality as a compiled cmdlet, but directly through the PowerShell scripting language. An example of an advanced function is as follows:

function SuperFunction {
<#
.SYNOPSIS
    Superduper Advanced Function.
.DESCRIPTION
    This is my Superduper Advanced Function.
.PARAMETER Message
    Message to write.
#>
    [CmdletBinding()]
    param(
        [Parameter(Position=0, Mandatory=$True, ValueFromPipeline=$True)]
        [String] $Message
    )
    Write-Host $Message
}

In the previous example, you will see that one of the major identifying aspects of an advanced function is the use of the CmdletBinding attribute. Usage of this attribute in an advanced function allows PowerShell to bind the parameters in the same manner that it binds parameters in a compiled cmdlet. For the SuperFunction example, CmdletBinding is used to define the $Message parameter with position 0, as mandatory, and is able to accept values from the pipeline. For example, the following shows the SuperFunction being executed, which then prompts for a message string. That message string is then written to the console:

PS C:\Users\tyson> SuperFunction

cmdlet SuperFunction at command pipeline position 1
Supply values for the following parameters:
Message: yo!
yo!

Finally, advanced functions can also use all of the methods and properties of the PSCmdlet class, for example:

  • Usage of all the input processing methods (Begin, Process, and End)
  • Usage of the ShouldProcess and ShouldContinue methods, which can be used to get user feedback before performing an action
  • Usage of the ThrowTerminatingError method, which can be used to generate error records
  • Usage of a various number of Write methods

Scripts

Scripts, the third command type, are PowerShell commands stored in a .ps1 file. The main difference from functions is that scripts are stored on disk and can be accessed any time, unlike functions that don't persist across PowerShell sessions.

Scripts can be run in a PowerShell session or at the cmd command prompt. To run a script in a PowerShell session, type the script name without the extension. The script name can be followed by any parameters. The shell then executes the first .ps1 file matching the typed name in any of the paths located in the PowerShell $ENV:PATH variable.

To run a PowerShell script from a cmd command prompt, first use the CD command to change to the directory where the script is located. Then run the PowerShell executable with the command parameter and specifying which script to be run, as shown here:

C:\Scripts>powershell -command .\myscript.ps1

If you don't want to change to the script's directory with the cd command, you can also run it by using an absolute path, as shown in this example:

C:\>powershell -command C:\Scripts\myscript.ps1

An important detail about scripts in PowerShell concerns their default security restrictions. By default, scripts are not enabled to run as a method of protection against malicious scripts. You can control this policy with the Set-ExecutionPolicy cmdlet, which is explained later in this chapter.
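For example, you can inspect the current policy and, from an elevated session, relax it; RemoteSigned is a common choice because it lets local scripts run while still requiring a signature on downloaded ones:

```powershell
Get-ExecutionPolicy                # Restricted by default
Set-ExecutionPolicy RemoteSigned   # local scripts run; downloaded scripts must be signed
```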

Native Commands

The last type of command, a native command, consists of external programs that the operating system can run. Because a new process must be created to run native commands, they are less efficient than other types of PowerShell commands. Native commands also have their own parameters for processing commands, which are usually different from PowerShell parameters.

.NET Framework Integration

Most shells operate in a text-based environment, which means you typically have to manipulate the output for automation purposes. For example, if you need to pipe data from one command to the next, the output from the first command usually must be reformatted to meet the second command's requirements. Although this method has worked for years, dealing with text-based data can be difficult and frustrating.

Often, a lot of work is necessary to transform text data into a usable format. Microsoft has set out to change the standard with PowerShell, however. Instead of transporting data as plain text, PowerShell retrieves data in the form of .NET Framework objects, which makes it possible for commands (or cmdlets) to access object properties and methods directly. This change has simplified shell use. Instead of modifying text data, you can just refer to the required data by name. Similarly, instead of writing code to transform data into a usable format, you can simply refer to objects and manipulate them as needed.
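For instance, because Get-Process emits System.Diagnostics.Process objects, a downstream command can filter and project on properties by name, with no text parsing:

```powershell
# The pipeline carries objects, so Where-Object can test the WS
# (working set) property directly and Select-Object can pick columns.
Get-Process | Where-Object { $_.WS -gt 50MB } | Select-Object Name, Id, WS
```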

Reflection

Reflection is a feature in the .NET Framework that enables developers to examine objects and retrieve their supported methods, properties, fields, and so on. Because PowerShell is built on the .NET Framework, it provides this feature, too, with the Get-Member cmdlet. This cmdlet analyzes an object or collection of objects you pass to it via the pipeline. For example, the following command analyzes the objects returned from the Get-Process cmdlet and displays their associated properties and methods:

PS C:\> Get-Process | Get-Member

Developers often refer to this process as “interrogating” an object. This method of accessing and retrieving information about an object can be very useful in understanding its methods and properties without referring to MSDN documentation or searching the Internet.

Extended Type System (ETS)

You might think that scripting in PowerShell is typeless because you rarely need to specify the type for a variable. PowerShell is actually type driven, however, because it interfaces with different types of objects from the less-than-perfect .NET to Windows Management Instrumentation (WMI), Component Object Model (COM), ActiveX Data Objects (ADO), Active Directory Service Interfaces (ADSI), Extensible Markup Language (XML), and even custom objects. However, you don't need to be concerned about object types because PowerShell adapts to different object types and displays its interpretation of an object for you.

In a sense, PowerShell tries to provide a common abstraction layer that makes all object interaction consistent, despite the type. This abstraction layer is called the PSObject, a common object used for all object access in PowerShell. It can encapsulate any base object (.NET, custom, and so on), any instance members, and implicit or explicit access to adapted and type-based extended members, depending on the type of base object.

Furthermore, it can state its type and add members dynamically. To do this, PowerShell uses the Extended Type System (ETS), which provides an interface that allows PowerShell cmdlet and script developers to manipulate and change objects as needed.

Note:
When you use the Get-Member cmdlet, the information returned comes from PSObject. Sometimes PSObject blocks members, methods, and properties of the original object. If you want to view the blocked information, use the base object through the PSBase standard name. For example, you could use the command $Procs.PSBase | Get-Member to view blocked information for the $Procs object collection.

Needless to say, this topic is fairly advanced, as PSBase is hidden from view. The only time you should need to use it is when the PSObject doesn't interpret an object correctly or you're digging around for hidden jewels in PowerShell.

Static Classes and Methods

Certain .NET Framework classes contain only static members and cannot be used to create new objects. For example, if you try to create a System.Math-typed object using the New-Object cmdlet, the following error occurs:

PS C:\> New-Object System.Math
New-Object : Constructor not found. Cannot find an appropriate constructor for type System.Math.
At line:1 char:11
+ New-Object <<<< System.Math

    + CategoryInfo : ObjectNotFound: (:) [New-Object], PSArgumentException
    + FullyQualifiedErrorId : CannotFindAppropriateCtor,Microsoft.PowerShell.Commands.NewObjectCommand

PS C:\>

This error occurs because static members are shared across all instances of a class and don't require a typed object to be created before being used. Instead, static members are accessed simply by referring to the class name as if it were the name of the object, followed by the static operator (::), as follows:

PS C:\> [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

In the previous example, the System.DirectoryServices.ActiveDirectory.Forest class is used to retrieve information about the current forest. To complete this task, the class name is enclosed in square brackets ([...]), and then the GetCurrentForest method is invoked by using the static operator (::).

Note:
To retrieve a list of static members for a class, use the Get-Member cmdlet: Get-Member -InputObject ([System.String]) -Static.
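Because System.Math contains only static members, everything it offers is reached through the static operator, which is exactly why New-Object failed above. A minimal sketch:

```powershell
# No New-Object needed; static members hang off the class name itself
[System.Math]::Sqrt(144)      # invoke a static method
[System.Math]::PI             # read a static field
[System.Math]::Max(3, 7)      # static methods take arguments as usual
```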

Type Accelerators

A type accelerator is simply an alias for specifying a .NET type. Without a type accelerator, defining a variable type requires entering a fully qualified class name, as shown here:

PS C:\> $User = [System.DirectoryServices.DirectoryEntry]"LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com"
PS C:\> $User

distinguishedname : {CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com}
path              : LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com

PS C:\>

Instead of typing the entire class name, you just use the [ADSI] type accelerator to define the variable type, as in the following example:

PS C:\> $User = [ADSI]"LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com"
PS C:\> $User

distinguishedname : {CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com}
path              : LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com

PS C:\>

Type accelerators have been included in PowerShell mainly to cut down on the amount of typing to define an object type. However, for some reason, type accelerators aren’t covered in the PowerShell documentation, even though the [WMI], [ADSI], and other common type accelerators are referenced on many web blogs.
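As a further illustration, the [xml] accelerator turns a string into a fully navigable System.Xml.XmlDocument. A short sketch with a made-up XML fragment:

```powershell
# [xml] is shorthand for System.Xml.XmlDocument
[xml]$doc = "<servers><server name='EX01' role='Mailbox' /></servers>"
$doc.servers.server.name      # navigate XML nodes as if they were properties
```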

Regardless of the lack of documentation, type accelerators are a fairly useful feature of PowerShell. Table 21.2 lists some of the more commonly used type accelerators.

TABLE 21.2 Important Type Accelerators in PowerShell

Name            Type
Int             System.Int32
Long            System.Int64
String          System.String
Char            System.Char
Byte            System.Byte
Double          System.Double
Decimal         System.Decimal
Float           System.Single
Single          System.Single
Regex           System.Text.RegularExpressions.Regex
Array           System.Array
Xml             System.Xml.XmlDocument
Scriptblock     System.Management.Automation.ScriptBlock
Switch          System.Management.Automation.SwitchParameter
Hashtable       System.Collections.Hashtable
Type            System.Type
Ref             System.Management.Automation.PSReference
Psobject        System.Management.Automation.PSObject
Pscustomobject  System.Management.Automation.PSCustomObject
Psmoduleinfo    System.Management.Automation.PSModuleInfo
Powershell      System.Management.Automation.PowerShell
Runspacefactory System.Management.Automation.Runspaces.RunspaceFactory
Runspace        System.Management.Automation.Runspaces.Runspace
Ipaddress       System.Net.IPAddress
Wmi             System.Management.ManagementObject
Wmisearcher     System.Management.ManagementObjectSearcher
Wmiclass        System.Management.ManagementClass
Adsi            System.DirectoryServices.DirectoryEntry
Adsisearcher    System.DirectoryServices.DirectorySearcher

The Pipeline

In traditional shells, data is transferred from one command to the next by using the pipeline, which makes it possible to string a series of commands together to gather information from a system. However, as mentioned previously, most shells have a major disadvantage: the information gathered from commands is text based. Raw text needs to be parsed (transformed) into a format the next command can understand before being piped.

The point is that although most UNIX and Linux shell commands are powerful, using them can be complicated and frustrating. Because these shells are text based, commands often lack functionality or require additional commands or tools to perform tasks. To address the differences in text output from shell commands, many utilities and scripting languages have been developed to parse text.

The result of all this parsing is a tree of commands and tools that makes working with shells unwieldy and time consuming, which is one reason for the proliferation of management interfaces that rely on GUIs. This trend can be seen among the tools Windows administrators use, too, as Microsoft has historically focused on enhancing the management GUI at the expense of the CLI.

With PowerShell, Windows administrators finally have access to the same automation capabilities as their UNIX and Linux counterparts. Better yet, PowerShell and its use of objects fill the automation need Windows administrators have had since the days of batch scripting and WSH, in a more usable and less parsing-intense manner. To see how the PowerShell pipeline works, take a look at the following PowerShell example:

PS C:\> get-process powershell | format-table Id -AutoSize

Id
--
3628

PS C:\>
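Because objects, not text, flow through the pipeline, later commands can filter and sort on any property without any parsing. A sketch (the processes listed will vary by system, so no output is shown):

```powershell
# Filter and sort on real properties; no text parsing anywhere
Get-Process |
    Where-Object { $_.WorkingSet -gt 50MB } |   # keep only memory-heavy processes
    Sort-Object WorkingSet -Descending |        # sort on the same property
    Format-Table Name, Id, WorkingSet -AutoSize
```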

Note:
All pipelines end with the Out-Default cmdlet. This cmdlet selects a set of properties and their values and then displays those values in a list or table.

Modules and Snap-Ins

One of the main design goals behind PowerShell was to make extending the default functionality in PowerShell and sharing those extensions easy enough that anyone could do it. In PowerShell 1.0, part of this design goal was realized through the use of snap-ins.

PowerShell snap-ins (PSSnapins) are dynamic-link library (DLL) files that can be used to provide access to additional cmdlets or providers. By default, a number of PSSnapins are loaded into every PowerShell session; these default PSSnapins contain the built-in cmdlets and providers that PowerShell uses. You can display a list of the loaded PSSnapins by entering the command Get-PSSnapin at the PowerShell command prompt, as follows:

PS C:\> get-pssnapin

Name        : Microsoft.PowerShell.Core
PSVersion   : 2.0
Description : This Windows PowerShell snap-in contains Windows PowerShell management cmdlets used to manage components of Windows PowerShell.

Name        : Microsoft.PowerShell.Host
PSVersion   : 2.0
Description : This Windows PowerShell snap-in contains cmdlets used by the Windows PowerShell host.
...

PS C:\>

In theory, PowerShell snap-ins were a great way to share and reuse a set of cmdlets and providers. However, snap-ins by definition must be written and then compiled, which often placed snap-in creation out of reach for many IT professionals. Additionally, snap-ins can conflict with one another, which meant that running a particular set of snap-ins within the same PowerShell session might not always be feasible.

That is why in PowerShell 2.0, the product team decided to introduce a new feature, called modules, which are designed to make extending PowerShell and sharing those extensions significantly easier. In its simplest form, a module is just a collection of items that can be used in a PowerShell session. These items can be cmdlets, providers, functions, aliases, utilities, and so on. The intent with modules, however, was to allow “anyone” (developers and administrators) to take and bundle together a collection of items. These items can then be executed in a self-contained context, which will not affect the state outside of the module, thus increasing portability when being shared across disparate environments.
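As a minimal sketch of how simple a script module can be (the module name and function are hypothetical), save the following as MyTools.psm1 and load it with Import-Module:

```powershell
# MyTools.psm1 -- a hypothetical script module; no compilation required
function Get-Uptime {
    # Uses WMI, which is available on Windows Server 2008-era systems
    $os = Get-WmiObject Win32_OperatingSystem
    (Get-Date) - $os.ConvertToDateTime($os.LastBootUpTime)
}

# Export only what callers should see; everything else stays private
Export-ModuleMember -Function Get-Uptime
```

Loading it is a one-liner: Import-Module .\MyTools.psm1, after which Get-Uptime is available in the session.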

Posted in Powershell | Tagged: , , , , , , | Leave a Comment »

Installing an SSH Server in Windows Server 2008

Posted by Alin D on September 29, 2010

There are a number of command-line options available to configure Windows Server 2008 over the network, such as Windows PowerShell, ServerManager.exe, or a telnet server. However, the tried-and-true method that has worked with just about every type of infrastructure device in use today (including Windows Server 2008, Cisco routers, Linux servers, and more) is SSH. In this article, learn how to install an SSH server in Windows Server 2008.

SSH is the Secure Shell, a standard defined in RFC 4251. It is a network protocol that opens a secure channel between two devices over TCP port 22. This channel can also be used for SFTP and SCP (secure FTP and secure copy, respectively). To make this work, you need an SSH server on the system you are connecting to and an SSH client on the system you are connecting from.
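Since everything rides over TCP port 22, a plain socket test tells you whether an SSH server is already reachable. A quick PowerShell sketch (the host name "server1" is a placeholder):

```powershell
# Attempt a raw TCP connection to port 22; this succeeds only if
# something (usually an SSH server) is listening there.
try {
    $tcp = New-Object System.Net.Sockets.TcpClient("server1", 22)
    "Port 22 is open"
    $tcp.Close()
} catch {
    "Port 22 is closed or unreachable"
}
```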

Keep in mind that SSH is completely interoperable across platforms. For example, you could connect to an SSH server on a Cisco router from a Windows client, connect to a Linux server from a Cisco router, or connect to a Windows Server 2008 system from a Linux client.

The only possible compatibility issue is that there are two versions of the protocol: SSH version 1 and SSH version 2. Make sure the server and client support the same version so that you know which version you are using when you connect; usually, the version is negotiated automatically. Where possible, prefer version 2, which addresses security weaknesses in version 1.

While none of the Windows operating systems ship with an SSH server or client, they are very easy to install.

By having a SSH Server on your Windows 2008 Server, you can:

Remotely access the command line of your Windows 2008 Server
Control the Server over the network, even if you cannot access the GUI interface
Remotely manage your Windows 2008 Server from any device that has a SSH Client
Do all this over an encrypted connection that could even securely traverse the Internet

SSH Server options available for Windows 2008 Server

There are a number of SSH server options available for Windows Server 2008. Here are a few that I ran across:

SSH.com – Free non-commercial SSH Server
SSH.com – SSH Tectia Client and Server (commercial)
OpenSSH – see article on how to install openssh server in Vista (applies to Windows Server 2008)
Van Dyke – vShell 3.0 Server (commercial)
Free SSHd
WinSSHd (commercial)
Kpym Telnet/SSH Server
copSSH for Windows (a modified build of OpenSSH)
Sysax Multi-Server (SSH Server) for Windows

Once you have your SSH server running, you will most likely need an SSH client for Windows. Here are a couple of the most popular SSH clients for Windows that I have found:

PuTTY
Van Dyke – SecureCRT (commercial)

Install of FreeSSHd – SSH Server in Windows Server 2008

Because the installation of FreeSSHd is so simple compared to the others (especially OpenSSH on Windows), I have chosen to demonstrate how to install and use FreeSSHd. Remember that FreeSSHd is totally free (as the name says), both for personal/non-commercial use and for commercial use.

To start this process, I downloaded FreeSSHd.exe on my Windows Server 2008 system and ran the downloaded program. The graphical installation began.

I took all the defaults for the installation options and clicked Install to begin the install.

When done, I opted not to run SSHd as a service, but that may be what you want to do on your production server.

Figure 1: Do you want to run FreeSSHd as a service?

By running FreeSSHd as a service, it would be available whether or not you were logged in to the console. I also chose to create private keys for the SSH server.

Next, I ran the FreeSSHd shortcut on the desktop in order to configure and start the SSH server.

Figure 2: Running the FreeSSH Application

I could see that the SSHd server was already running.

The FreeSSHd application can offer the following:

Both SSH Server and Telnet Server capabilities
Options to run SSHd on only certain interfaces
Multiple methods of authentication, including integrated NTLM authentication to Windows AD
Multiple methods of encryption including AES 128, AES 256, 3DES, Blowfish, and more
Options to bring up a secure tunnel upon connection
Optional Secure FTP (sFTP) – for secure FTP, see the FreeFTPd website
The ability to administer users and restrict access to secure shell, secure tunnel, or secure FTP
Ability to allow access to only certain hosts or subnets
Ability to log all connections and commands performed through FreeSSHd
View currently connected users
Update FreeSSHd automatically

For me to be able to log in, I had to do two things:

Add a new user account and allow SSH command line access
Open an exception in my Windows Server 2008 Firewall

To add a new user, I went to the Users tab and clicked Add.

I opted to set up a login for my local Windows administrator account and set the authorization to NTLM. That way, there is no local password in the FreeSSHd database, and if the administrator password changes in the local Windows account database, I don't have to change the password in the FreeSSHd account database.

I authorized this new administrator SSH user to log in with SSH only.


Figure 3: Adding a SSHd user account with NTLM authorization

Here are the results:

Figure 4: A new SSHd user account added

The second thing I had to do before I could log in was open an exception in the Windows Firewall. While I could have disabled the Windows Firewall completely instead of opening the port, the most secure option is to leave the firewall up and allow an exception for SSH on TCP port 22.

To do that, I went to Start -> Administrative Tools -> Windows Firewall with Advanced Security.


Figure 5: Opening Windows Firewall with Advanced Security

Next, I clicked on Inbound Rules, then on New Rule.


Figure 6: Adding a new Inbound Rule

Next, I chose to add a Port rule.


Figure 7: Choosing to add a Rule for a Port

I specified TCP port 22 only.


Figure 8: Specifying TCP port 22 only

Take the defaults to Allow the connection, apply the rule to all profiles, and give the rule a name of your choice.
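If you prefer the command line to the wizard, the same inbound rule can be created with netsh from an elevated prompt (the rule name is arbitrary):

```powershell
# Equivalent to the wizard steps above: allow inbound TCP 22
netsh advfirewall firewall add rule name="SSH Server (TCP 22)" `
    dir=in action=allow protocol=TCP localport=22
```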

Test the Connection

To test the connection, I used SecureCRT from my Windows XP machine to the Windows Server 2008 server, via SSH.

To do this, I connected to the server via its IP address (or domain name) and chose to accept and save the server's host key.


Figure 9: Connecting via SSH and logging in with your Windows username & password

I logged into the server using the administrator login and password.

And, success! I was able to access the server via SSH!


Figure 10: A successful connection to the Windows 2008 Server via SSH

In Summary
SSH is an excellent tool for Windows Server 2008 administrators to consider for remote server management. In this article, you learned how SSH can help you, the options available for SSH Server and SSH Client installations, and how to install one of those options, FreeSSHd.

Posted in Windows 2008 | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »