Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server, and Azure


New Features in IIS 8

Posted by Alin D on September 10, 2012

Each new version of Microsoft Internet Information Services is a little like a new installment in a novel series where each book comes several years apart, but proves to be well worth waiting for.
IIS 8, which comes with Windows Server 2012, has new features aimed at those who are putting together large-scale Web hosts. But one of the nice side effects of those big-scale features is how they dial down to smaller hosts and individual servers as well.

CPU throttling: the next generation

IIS 7 has a CPU throttling function that prevents unruly sites from gobbling up too much CPU. Unfortunately, it has an all-or-nothing flavor to it which makes it less useful than it ought to be.

First, when you have throttling set for a site, the only form of throttling available is to kill the site process entirely for a certain length of time. You can set the CPU threshold and kill length, but it means the site is completely disabled for whatever that length of time is. There is no native way to configure IIS to have a site only use 90% of CPU for processor X (or all processors) at any time.

Second, IIS 7’s CPU throttling is bound to a given application pool. This isn’t so bad if you have a separate pool for each website, and that by itself isn’t a bad idea if you have the CPU cores to throw at such a proposition. (Even if you only have one core, it’s still not a bad idea for low-CPU sites.) But if you have multiple sites that share the same application pool, they all go offline if CPU throttling kicks in for only one of those sites.

IIS 8’s solution to all this is to add two new actions to the way CPU throttling works: Throttle and Throttle under load. Throttle caps CPU for a given worker process, and any child processes spawned by that worker as well. Throttle under load allows a site to use as much CPU as is available, but will throttle that process back if it starts competing for CPU with other processes.

This allows throttling to be done without killing the process wholesale, and adds that much more flexibility in multi-tenancy environments. You can run that many more sites side-by-side, with or without setting explicit processor affinities for their worker processes, and not have them stomp all over each other.
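
Because these actions are just application-pool settings, they can also be scripted. Here is a minimal sketch using the WebAdministration PowerShell module on Windows Server 2012; the pool name "TenantPool" is a hypothetical placeholder, and the limit value is an arbitrary example.

Import-Module WebAdministration
# cpu.limit is expressed in 1/1000ths of a percent, so 80000 means 80% of CPU
Set-ItemProperty 'IIS:\AppPools\TenantPool' -Name cpu.limit -Value 80000
# ThrottleUnderLoad clamps the pool only while other processes are competing for CPU
Set-ItemProperty 'IIS:\AppPools\TenantPool' -Name cpu.action -Value ThrottleUnderLoad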

Another refinement is the Application Initialization Module, which allows a site to accept requests for pages and respond with a friendly message while the site code itself is still being spun up. This feature can keep people from pounding on their browser’s refresh button when a change to a library forces a recompile.
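
Once the Application Initialization module is installed, warm-up is largely a matter of marking an application for preloading. A rough sketch, assuming the WebAdministration module; the site name "Default Web Site" is only an example:

Import-Module WebAdministration
# Ask IIS to spin the application up before the first real request arrives
Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name applicationDefaults.preloadEnabled -Value $true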

SSL improvements

I’ve never liked the way IIS has handled SSL. “Clunky” and “cumbersome” are two of the less vitriolic adjectives I’ve used to describe the whole process of adding and managing SSL certificates to IIS. Thankfully, IIS 8 has three major new improvements to its handling of SSL.

Centralized certificate management. IIS 7 forces you to import each certificate into each instance of IIS, which is a headache if you’re managing a whole farm’s worth of servers. IIS 8 lets you create a Central Certificate Store, or CCS. This allows all the certificates needed across your farm to be placed in a single location. The name of the certificate file can be used to automatically map and bind the certificate to the domain in question, and multiple-domain certificates are also supported through this scheme (you just make multiple copies of the certificate and rename it appropriately).
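
The certificate store can be pointed at from PowerShell as well as from the GUI. The following is a sketch only, assuming the Centralized SSL Certificate Support feature and its Enable-WebCentralCertProvider cmdlet are installed; the share path and credentials are hypothetical placeholders.

# Point this server at the farm-wide certificate share
Enable-WebCentralCertProvider -CertStoreLocation '\\fileserver\certshare' `
    -UserName 'CONTOSO\certreader' -Password 'P@ssw0rd' -PrivateKeyPassword 'P@ssw0rd'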

Server Name Indication support (for using SSL with host headers). Not long ago I discovered for myself, the very hard and painful way, how difficult it is to use SSL on a server where multiple sites share a single IP address and use host headers. A technology named Server Name Indication (SNI) allows SSL to be used on sites that can only be reached via host headers, but it requires both a server and a client that support it. IIS 8 fixes the "server" end of the equation, and most recent browsers provide support (with one glaring exception being any version of IE on Windows XP).
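
On the server side, an SNI binding is simply an https binding with the right flag. A minimal sketch with the WebAdministration module; the site and host names are hypothetical:

Import-Module WebAdministration
# SslFlags 1 requires Server Name Indication for this binding
New-WebBinding -Name 'ContosoSite' -Protocol https -Port 443 -HostHeader 'www.contoso.com' -SslFlags 1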

Scalability. Thanks to improvements in how certificates are loaded and managed, SSL-enabled sites now scale far more efficiently, and you can support many more of them on the same hardware (up to thousands). On the same note, IIS's handling of configuration files (*.config) has been reworked for the same kind of scale.

FTP Logon Restrictions and Dynamic IP Restrictions

I have a theory: because Microsoft has had such a brutal trial by fire as far as security goes, they’re being forced to constantly think about new and more proactive ways to make their server products secure. To that end, two new security features in IIS help provide short- and long-term blocking of IP addresses for both the HTTP and FTP services. Granted, nobody uses IP blocking as any kind of permanent solution to a security issue, but such a feature is still useful to have as a stopgap against attacks.

Dynamic IP restrictions allow you to configure IIS to block access from IP addresses that exceed a configured number of requests in a given period of time, or that issue more than a certain number of concurrent requests. What's more, the denial can take more forms than the standard 403.6 Forbidden error IIS 7 would return in such circumstances: the server can be set to return a 401 (Unauthorized), 403 (Forbidden), or 404 (Not Found), or simply terminate the HTTP connection without returning an error at all. A special proxy mode also inspects the X-Forwarded-For HTTP header, as a way to find the true source of traffic that arrives through a proxy.
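
These limits live in the system.webServer/security/dynamicIpSecurity configuration section, so they can be scripted as well as clicked through. A rough sketch at the server level; the thresholds are arbitrary examples:

Import-Module WebAdministration
# Deny an address that issues more than 50 requests in any 5-second window
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name maxRequests -Value 50
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/dynamicIpSecurity/denyByRequestRate' `
    -Name requestIntervalInMilliseconds -Value 5000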

FTP logon attempt restrictions allow you to lock out people if they try to make multiple failed attempts to log into the FTP server. The lockout period is normally 30 seconds, but it can be set to anything you want, and the number of attempts is also flexible. This works a little like the tarpitting/graylisting systems used to keep spammers from overwhelming mail servers: only those who are clearly trying to barge their way in get stalled.
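
This, too, is exposed through configuration rather than only the GUI. The sketch below assumes the setting lives in a denyByFailure element under system.ftpServer/security/authentication at the server level; that path, and the attribute names, are assumptions worth verifying against your installation.

Import-Module WebAdministration
# Lock an address out after four failed FTP logons (assumed section path and attribute names)
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.ftpServer/security/authentication/denyByFailure' `
    -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.ftpServer/security/authentication/denyByFailure' `
    -Name maxFailure -Value 4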

Multicore scaling and NUMA awareness

Most of the IIS servers I’ve dealt with have been minimal affairs, with a handful of low-traffic sites that share space on a single- or dual-core server. I know full well, though, that some IIS setups are sprawling affairs: dozens of cores or sockets, many gigabytes of RAM, and all sorts of other high-end hardware features to make sysadmins cry with joy.

IIS hasn’t always made the best possible use of some of those high-end features. Multiple cores, for instance: according to Microsoft, one of the problems of adding more cores is that after a while it actually hurts performance in some setups “because the cost of memory synchronization out-weighs the benefits of additional cores.” In other words, the processing power of those extra cores is offset by the overhead required to keep memory synchronized with a given core.

IIS 8 has a new feature to compensate for this problem: Non-Uniform Memory Architecture (NUMA) awareness. NUMA servers dedicate specific areas of physical memory to specific processors, with crossbar or bus systems to allow processors to talk to the memory that’s not “local” to that processor. Both operating systems and software have to be written to take proper advantage of NUMA, but the benefits include being able to do things like hot-swap failing memory modules and, most importantly, not succumb to the ways poor memory architecture can kill performance.

IIS 8 supports NUMA and multicore scaling in several different ways:

Workload partitioning. IIS can pool worker processes by creating the same number of worker processes as there are NUMA nodes, so each process runs on its own node — essentially, the “Web garden” approach. You can also have IIS set up multiple worker processes and have the workloads distributed across each node automatically.

Node optimization. The default method IIS uses for picking a node when a given worker process starts is to choose the one that has the most available memory, since that’s typically going to yield the best results. IIS can also default to letting Windows Server itself make that decision, which is useful if you have other NUMA-aware server-level apps running on the same hardware.

Thread affinity. IIS can use one of two methods to figure out how to pair threads with NUMA nodes. The “soft affinity” method allows a given thread to be switched to another NUMA node if that node has more CPU to spare. The “hard affinity” method picks a node for a thread and leaves it there.
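
Node assignment and thread affinity are per-application-pool settings on the cpu element, so they can be scripted like everything else. A minimal sketch, assuming the WebAdministration module and a hypothetical pool named "BigPool":

Import-Module WebAdministration
# Let Windows pick the NUMA node instead of the most-available-memory default
Set-ItemProperty 'IIS:\AppPools\BigPool' -Name cpu.numaNodeAssignment -Value WindowsScheduling
# Keep worker threads pinned to their node ("hard" affinity) rather than letting them drift
Set-ItemProperty 'IIS:\AppPools\BigPool' -Name cpu.numaNodeAffinityMode -Value Hard
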
WebSockets

This long-in-development technology addresses one of HTTP's limitations since its inception: there has been no good way to keep a connection open indefinitely between the client and the server for real-time, full-duplex communication. IIS 8 adds WebSocket support, although it has to be installed as part of the "Application Development" package of add-ons when setting up IIS 8 (along with, say, ASP.NET 4.5).
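
On Windows Server 2012 that add-on can also be installed from PowerShell rather than the Server Manager wizard. A short sketch, using feature names as I recall them (verify with Get-WindowsFeature Web-*):

# WebSocket Protocol plus ASP.NET 4.5 support for IIS
Install-WindowsFeature Web-WebSockets, Web-Asp-Net45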

Conclusion

While many of the new IIS 8 features are clearly designed for those hosting whole server farms or massive multi-core setups, there’s a lot here to appeal to folks on other tiers as well. I know that if I ever upgrade the server I’m using to a multicore model—even just 2-4 cores—I’ll have a whole raft of new IIS features I can use to make it all the more worth my investment.

Posted in Windows 2012

Pass Mcse 70-290 Exam Easily

Posted by Alin D on September 5, 2010

Pass Mcse 70-290 Exam Easily

MCSE 2003 70-290 Certification

Get Certified in Days

According to our survey, over 85% of candidates acknowledge that they spent needless time and money before finding the most suitable way to pass the 70-290 exam. Whether you are just starting out or are a skilled technician looking for the most efficient path to certification, we have the right solution for you.

We provide the following to help you get certified in the most convenient way:

A 24/7, around-the-clock consulting service that will assist and guide you until you get certified. The price also includes exam vouchers and all other related expenses; there is no further cost to attain your certification.

Our Guarantee

We will refund any payment you make should you fail, for any reason, to get certified. The refund is an unconditional, total refund of any monies paid.

Why MCSE 2003

MCSE 2003 70-290 certification is among the most specialized certifications available today. It gives you industry recognition for your expertise in business solutions based on the Microsoft Windows Server 2003 platform and Microsoft 2003 server software. Implementation responsibilities include installing, configuring, and troubleshooting network systems. The MCSE 2003 credential is one of the most widely recognized technical certifications in the industry and is in high demand. By earning the premier MCSE credential, individuals demonstrate that they have the skills necessary to lead organizations in the successful design, implementation, and administration of the most advanced Microsoft Windows platform and Microsoft server products.

MCSE 2003 Certification Requirement:

1. Core exams (six exams required)

• Four networking system exams: (four exams required)

Exam 70-290: Managing and Maintaining a Windows Server 2003 Environment.

Exam 70-291: Implementing, Managing, and Maintaining a Microsoft Windows Server 2003 Network Infrastructure.

Exam 70-293: Planning and Maintaining a Windows Server 2003 Network Infrastructure.

Exam 70-294: Planning, Implementing, and Maintaining a Windows Server 2003 Active Directory Infrastructure.

• One client operating system exam: (one exam required)

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-270: Installing, Configuring, and Administering Microsoft Windows XP Professional.

Exam 70-210: Installing, Configuring, and Administering Microsoft Windows 2000 Professional.

• One design exam:

Exam 70-297: Designing a Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Windows Server 2003 Network.

2. Elective exams (one exam required)

Exam 70-089: Designing, Implementing, and Managing a Microsoft Systems Management Server 2003 Infrastructure.

Exam 70-227: Installing, Configuring, and Administering Microsoft Internet Security and Acceleration (ISA) Server 2000, Enterprise Edition.

Exam 70-228: Installing, Configuring, and Administering Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-229: Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition.

Exam 70-235: TS: Developing Business Process and Integration Solutions Using BizTalk Server.

Exam 70-236: TS: Microsoft Exchange Server 2007, Configuring.

Exam 70-262: TS: Microsoft Office Live Communications Server 2005 – Implementing, Managing, and Troubleshooting.

Exam 70-281: Planning, Deploying, and Managing an Enterprise Project Management Solution.

Exam 70-282: Designing, Deploying, and Managing a Network Solution for a Small- and Medium-Sized Business.

Exam 70-284: Implementing and Managing Microsoft Exchange Server 2003.

Exam 70-285: Designing a Microsoft Exchange Server 2003 Organization.

Exam 70-297: Designing a Microsoft Windows Server 2003 Active Directory and Network Infrastructure.

Exam 70-298: Designing Security for a Microsoft Windows Server 2003 Network.

Exam 70-299: Implementing and Administering Security in a Microsoft Windows Server 2003 Network.

Exam 70-301: Managing, Organizing, and Delivering IT Projects by Using Microsoft Solutions Framework 3.0.

Exam 70-350: Implementing Microsoft Internet Security and Acceleration (ISA) Server 2004.

Exam 70-431: TS: Microsoft SQL Server 2005 – Implementation and Maintenance.

Exam 70-445: Microsoft SQL Server 2005 Business Intelligence – Implementation and Maintenance.

Exam 70-500: TS: Microsoft Windows Mobile Designing, Implementing, and Managing.

Exam 70-501: TS: Microsoft Windows Server 2003 Hosted Environments, Configuring, and Managing.

Exam 70-620: TS: Microsoft Windows Vista, Configuring.

Exam 70-624: TS: Deploying and Maintaining Windows Vista Client and 2007 Microsoft Office System Desktops.

Exam 70-630: TS: Microsoft Office SharePoint Server 2007, Configuring.

Exam 70-631: TS: Configuring Microsoft Windows SharePoint Services 3.0.

The author has rich experience in writing; his articles are frequently published on major websites and in newspapers, are welcomed by a large number of readers, and are widely quoted by other writers.

Posted in Windows 2003

An Introduction to Windows PowerShell and IIS 7.0

Posted by Alin D on August 19, 2010

About Windows PowerShell

Windows PowerShell is Microsoft's comprehensive next-generation shell environment and scripting language. Think of Windows PowerShell as a dramatic upgrade to the old cmd.exe command shell and .BAT files. Users may ask why there is a new command shell when cmd.exe works well enough, and object that there is no time to learn yet another scripting language. Windows PowerShell is a genuine improvement over previous Microsoft command-line scripting technologies: it is easier to use for both simple and complex tasks, and it is surprisingly easy to learn.

Some of the improvements from Windows PowerShell include:

  • An updated and consistent scripting language
  • Intrinsic regular expression capabilities
  • The ability to call into the .NET Framework, WMI extensions, and the Windows registry 

This section provides concrete examples and highlights a few of Windows PowerShell's features. The following sections then discuss how Windows PowerShell works together with IIS 7.0.

To get a feel for Windows PowerShell, look at an example. Consider the screen shot in Figure 1.

 

Figure 1 – Windows PowerShell Basics

First, notice that the shell in Windows PowerShell looks like a traditional Windows command prompt. Using Windows PowerShell will quickly feel quite natural after a brief ramp-up period.

The first command is:


PS C:\> set-location Data

This invokes a Windows PowerShell cmdlet (pronounced "command-let") to change the current working directory from C:\ to C:\Data. It is the functional equivalent of the old cd (change directory) command. You may object that typing "set-location" every time you change directories is too much typing; that is a fair point.

Windows PowerShell has an extensive set of shortcut aliases you can use. The set-location cmdlet is aliased to sl (a shortened version of the full cmdlet name), and to cd (the “old” way). This article uses the full version of cmdlet names for improved readability. 
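
If you are curious which aliases map to a particular cmdlet, PowerShell can tell you itself. For example, the following command (a quick aside, using only the built-in get-alias cmdlet) lists cd, chdir, and sl as aliases of set-location:

PS C:\> get-alias -definition set-location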

The second command is:


PS C:\Data> get-childitem Pow*

This lists the items in the current directory whose names start with "Pow". Windows PowerShell is not case sensitive, so you can type Get-ChildItem or GET-ChildItem; this article uses all lower-case. The get-childitem cmdlet is aliased to dir (for Windows familiarity), to ls (for Unix users), and to gci for ease of typing.

Next, use the copy-item cmdlet to copy the Windows PowerShell directory, including all sub-directories, to a new directory named PSBackup:


PS C:\Data> copy-item 'PowerShell' 'C:\Data\PSBackup' -recurse -force

Then immediately delete the newly created directory and all its contents using the remove-item cmdlet:


PS C:\Data> remove-item PSBackup -recurse

The next command uses the get-content cmdlet to fetch the contents of file Hello.txt and then save those contents to a new HelloCopy.txt file in the current directory by piping (with the ‘|’ character) to the out-file cmdlet:


PS C:\Data> get-content 'Hello.txt' | out-file '.\HelloCopy.txt'

The next to last command uses get-content to display the contents of the new file:


PS C:\Data> get-content HelloCopy.txt

The get-content cmdlet is roughly equivalent to the type (Windows) or cat (Unix) commands. Finish the mini-demo by using the sl alias to change the working directory to the root drive:


PS C:\Data> sl \

Now, if all there were to Windows PowerShell were a new set of commands for common file-system navigation and manipulation tasks, there would be no point in reading further, and a brief introduction like this one could leave that incorrect impression. In fact, Windows PowerShell has many advantages over existing shell environments.

To conclude this introductory discussion: Windows PowerShell does have a learning curve, and new technology is of little use unless there is a way to learn it quickly. Programmers call this quality discoverability, and Windows PowerShell was designed with excellent discoverability characteristics, making it much easier to learn.

For example, you can get a list of all cmdlets simply by typing get-command at the Windows PowerShell prompt. You can also get detailed information about a particular cmdlet by typing get-help followed by the cmdlet name. Extensive experience teaching Windows PowerShell to engineers and managers shows that most engineers can become adept at using Windows PowerShell with a single day of practice.
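
To make that concrete, here is the sort of two-command session that answers most "how do I...?" questions; get-childitem is used purely as an example:

PS C:\> get-command -verb get
PS C:\> get-help get-childitem -examples

The first command lists every cmdlet whose verb is get; the second prints worked examples for get-childitem.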

Windows PowerShell in a Typical IIS 7.0 Environment

The previous section gave a short, basic overview of Windows PowerShell. A real advantage of Windows PowerShell, however, comes from its ability to interact with and manage IIS 7.0. Engineers reading this article might be skeptical of "latest and greatest" claims, so this article will shortly show the power behind Windows PowerShell: many IIS 7.0 management tasks can be accomplished using Windows PowerShell commands and scripts as well as the new IIS 7.0 GUI tools.

With IIS 7.0, you now have the ability to perform many management tasks using any of the following: 

  • Graphical user interface (GUI) 
  • Interactive Windows PowerShell commands
  • Windows PowerShell scripts 

Users with significant experience managing server software through shells and scripts do not need any more motivation; but those accustomed to strictly GUI tools may ask what is special about managing IIS 7.0 through a command line or a script when they have managed well enough with MMC. The end of this article points out six significant advantages of managing IIS 7.0 using Windows PowerShell.

The following are some Windows PowerShell examples based on a Web cast featuring Windows PowerShell architect Jeffrey Snover and IIS 7.0 Product Unit Manager Bill Staples. (Find the Web cast at http://channel9.msdn.com/).

Suppose you want to examine the IIS-related services running on your computer, a very common task. One "GUI approach" is to launch MMC, expand the Services and Applications node, and then select Services. The result looks like the screenshot in Figure 2.

 

Figure 2 – Using MMC to Get Service Information

  
 

Listing Windows services using Windows PowerShell is easy. For example, from the Windows PowerShell prompt, use the get-service cmdlet:


PS C:\> get-service

This is not very compelling, but suppose you want to list only services that begin with the letter ‘w’ and sort them by status. One way to do this is:


PS C:\> get-service -include w* | sort-object -property status

You can interpret this command as: fetch all Windows service information, filter it to include just those services whose names begin with the letter 'w', and then sort the results by service status (running, stopped, paused). The result looks like the screenshot in Figure 3.

 

Figure 3 – Using Windows PowerShell to Get Service Information

As pointed out in the previous section, you can type terse PowerShell commands; the previous command can be shortened to:


PS C:\> gsv w* | sort status

Here, gsv is an alias for get-service, and the command takes advantage of the fact that the -include switch occupies the first parameter position, so its name can be omitted. Likewise, sort is an alias for the sort-object cmdlet, whose -property switch is also positional. Now suppose that you want to stop the World Wide Web Publishing Service. Without PowerShell, you would right-click the W3SVC service to get its context menu and click Stop. Using Windows PowerShell you can issue the command:


PS C:\> stop-service -servicename w3svc

or, in shortened form:


PS C:\> spsv w3svc

Another common task is examining the processes running on a machine. At this point, you can predict how to do this using Windows PowerShell — Windows PowerShell’s consistent and logical cmdlet naming scheme makes guessing commands easy rather than frustrating:


PS C:\> get-process

Suppose you want to view running processes sorted by the number of handles owned by each process:


PS C:\> get-process | sort-object -property handles

You can get this information just as easily using the GUI-based Windows Task Manager. But consider what these three Windows PowerShell commands do:


PS C:\> $p = get-process
PS C:\> $result = $p | measure-object -property handles -sum -average -max
PS C:\> $result | out-file '.\ProcessHandleStats.txt'

The first command, $p = get-process, fetches all the information about processes currently running on the host machine, and stores that information into variable $p.

The second command, $result = $p | measure-object -property handles -sum -average -max, sends the captured process information to the measure-object cmdlet, which computes the sum, average, and maximum of the handle counts of the currently running processes, and stores that information in variable $result. If you examined $result at this point, you would see something like:

Count    : 54
Average  : 273.148148148148
Sum      : 14750
Maximum  : 1625
Minimum  :
Property : Handles

Notice that in this example there are a total of 54 processes running, and a total of 14,750 handles in use, which is an average of about 273 handles per process. The largest number of handles used by a process is 1625 handles.

The third command, $result | out-file '.\ProcessHandleStats.txt', saves the results to a text file. Experienced Windows PowerShell users would likely combine the three commands just described into a single command like:


PS C:\> gps | measure-object handles -sum -average -max |
        out-file '.\ProcessHandleStats.txt'

One characteristic of the Windows PowerShell architecture is that Windows PowerShell is easily extensible at all levels by both users and third party companies. Windows PowerShell extensibility is a topic in its own right, and here is but one example.

In the Web cast demonstration referenced above, Jeffrey Snover and Bill Staples show a remarkable Windows PowerShell visualization cmdlet developed by a third-party company. This cmdlet is named out-gauge; notice the semantic similarity to the intrinsic out-file cmdlet. Instead of sending output to a file as out-file does, out-gauge sends output to a visually rich set of controls. For example, one of the commands demonstrated in the Web cast is:


PS C:\> get-process |
        measure-object handles -sum |
        out-gauge -value sum -refresh 0:0:1 -float -type

This command produces a floating, digital-style gauge on the screen that displays the total number of handles in use in real time, updating the display every second. All this suggests that a wide range of useful Windows PowerShell-based tools will soon be available.

Next, look at an example of IIS 7.0 Web site deployment using Windows PowerShell. Because previous versions of IIS store configuration in the metabase, copying a Web site from one machine to another is not simply a matter of copying files; IIS 7.0 makes it exactly that. Consider this Windows PowerShell script:


# file: Deploy-Application.ps1

$sourceMachine = "DemoServer1"
$farmList = get-content '.\RestOfFarm.txt'
$filesToCopy = get-content '.\AppManifest.txt'

foreach ($targetMachine in $farmList)
{
   foreach ($file in $filesToCopy)
   {
      $sourcePath = "\\" + (join-path $sourceMachine $file)
      $destPath   = "\\" + (join-path $targetMachine $file)
      write-host -foregroundcolor yellow "$targetMachine : Copying files from $sourcePath"
      copy-item $sourcePath $destPath -recurse -force
   }
}

This script is complex but instructive. When saved as file Deploy-Application.ps1 and then executed from a Windows PowerShell command line, it looks like this:


PS C:\> .\Deploy-Application.ps1

The net effect is to copy all files listed in file AppManifest.txt, located on machine DemoServer1, to all the machines listed in file RestOfFarm.txt. One feature of Windows PowerShell is that well-written scripts are easy to understand, relative to the alternatives such as VBScript or Perl. The script uses the get-content cmdlet to read machine names from file RestOfFarm.txt and file names from file AppManifest.txt.

The foreach loop may be new to you. The outer loop iterates through each machine name stored in variable $farmList, storing each name into variable $targetMachine in turn. The inner loop is similar and stores each file into $file in turn. The join-path cmdlet is used to intelligently concatenate strings to produce complete source and destination paths.

Finally, the copy-item cmdlet performs the copy, where the -recurse switch copies all sub-directories and the -force switch causes existing files to be overwritten. Notice that this script has all the source and destination information hard-coded into it. Windows PowerShell has excellent parameter-passing capabilities, so the script could be parameterized to accept that information from the command line; a full treatment is outside the scope of this article, but a minimal sketch follows.
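
The sketch below shows what that parameterization might look like, replacing the hard-coded assignments at the top of the script with a param block (the parameter names are hypothetical):

# Deploy-Application.ps1, parameterized (sketch only)
param(
    $sourceMachine = 'DemoServer1',
    $farmFile      = '.\RestOfFarm.txt',
    $manifestFile  = '.\AppManifest.txt'
)
$farmList    = get-content $farmFile
$filesToCopy = get-content $manifestFile

It could then be invoked as, say, .\Deploy-Application.ps1 -sourceMachine 'DemoServer2'.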

Posted in Windows 2008