Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘Driver’

Use Event Tracing for Windows to extract data from a Windows crash dump

Posted by Alin D on August 30, 2012

Troubleshooting Windows Server hangs might be one of the toughest challenges a system administrator faces. When a server starts to hang, things can quickly go from bad to worse. Often, it is too late to set up counter logs to diagnose the problem in Microsoft’s Performance Monitor, more commonly referred to as Perfmon, or to use Task Manager to catch the culprit in the act. The server seems to freeze without any sign of what caused the problem, and you hit the reset button praying it will reboot.

Sound familiar?

What if, just like an airplane’s flight recorder, also known as the black box, you could replay the last few seconds of the server’s performance just prior to the lock-up?

This article describes how to use two of my favorite troubleshooting techniques, namely crash dump analysis and Event Tracing for Windows (ETW), to determine what caused your server to hang.

Event Trace Sessions

The secret is the built-in Event Trace Sessions that Windows has provided since Vista and Windows Server 2008. One of these trace sessions is known as the Circular Kernel Context Logger, or CKCL for short. It provides a 2 MB circular buffer that continually tracks kernel performance statistics in memory.

It is possible to extract this buffer from a forced memory dump and reveal the last few seconds of kernel activity. Extracting the buffer extends the usefulness of a crash dump and provides a snapshot of the server at the time of the hang that includes a history of the last few seconds.
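The "flight recorder" behavior of a circular buffer is easy to picture with a short Python sketch. This is purely illustrative: the real CKCL is a 2 MB buffer in kernel memory, not a Python object, but the overwrite-the-oldest behavior is the same.

```python
from collections import deque

class CircularTraceBuffer:
    """Sketch of a fixed-size circular trace buffer: once full, the oldest
    events are overwritten, so only the most recent activity survives --
    the same idea the CKCL uses in kernel memory."""

    def __init__(self, capacity):
        # deque with maxlen drops the oldest entries automatically
        self.events = deque(maxlen=capacity)

    def log(self, event):
        self.events.append(event)

    def snapshot(self):
        # What a crash dump would capture: only the last `capacity` events.
        return list(self.events)

buf = CircularTraceBuffer(capacity=3)
for i in range(10):
    buf.log(f"event-{i}")
print(buf.snapshot())  # → ['event-7', 'event-8', 'event-9']
```

Ten events went in, but only the last three survive, which is exactly why a forced dump taken at the moment of a hang contains the final seconds of kernel activity.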

To enable the CKCL, you must select the kernel providers you want included in your trace. This can be accomplished by starting Computer Management or Perfmon to display Data Collector Sets, as seen below in Figure 1. You will then find Startup Event Trace Sessions, which lists the built-in event trace sessions, including the CKCL.

Next, you need to display the properties for the CKCL trace session by double-clicking it or right-clicking to select properties. On the Trace Providers tab, highlight the property called Keywords(Any) and click Edit… to select the providers you want to trace (e.g., process, thread, file).

Figure 1: Startup Event Trace Sessions listed under Data Collector Sets in Perfmon

Finally, on the Trace Session tab, select the Enabled checkbox.

Once you acknowledge the changes, you can right-click the CKCL trace session to select Start As Event Trace Session. This will start the CKCL trace session and list it under Event Trace Sessions, along with the other built-in sessions, all of which show a status of Running.

To automate the process of enabling and starting the CKCL after a reboot, you can use the following example Logman command in a script with the Task Scheduler. Use the Task Scheduler’s Actions tab to specify the script and the Triggers tab to specify on startup:

Logman start "Circular Kernel Context Logger" -p "Circular Kernel Session Provider" (process,thread,img,file,driver) -ets
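If you prefer scripting the setup, the same command line can be assembled programmatically. This Python sketch only builds the argument list; the actual logman invocation, of course, works only on a Windows host, so the subprocess call is shown but not executed here.

```python
import subprocess  # only needed when actually launching logman on Windows

def build_ckcl_start_command(providers=("process", "thread", "img", "file", "driver")):
    """Build the Logman command that enables the Circular Kernel Context
    Logger, returned as an argument list suitable for subprocess.run."""
    return [
        "logman", "start", "Circular Kernel Context Logger",
        "-p", "Circular Kernel Session Provider",
        "(" + ",".join(providers) + ")",
        "-ets",  # act on the event trace session directly
    ]

cmd = build_ckcl_start_command()
print(cmd)
# On a Windows box you would then run:
#   subprocess.run(cmd, check=True)
```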

That’s it. All you need to do now is sit back and wait for the next hang to occur. When it does, use the keyboard crash mechanism (hold the right Ctrl key and press ScrollLock twice, which requires the CrashOnCtrlScroll registry value to be enabled) or an NMI mechanism to manually force a system memory dump. Once the system reboots, you will be able to use the Windows debugger to analyze the memory dump.

Extracting performance data from memory dumps

The magical debugger extension that allows you to extract the Event Tracing for Windows performance data from the dump is called !wmitrace. There are two commands you’ll need to know:

Figure 2: !wmitrace.strdump output showing the running trace sessions

!wmitrace.strdump

!wmitrace.logsave [logger ID] [save location].etl

The first command, !wmitrace.strdump, is used to display all of the Event Trace Sessions running at the time of the forced memory dump. You will see the Circular Kernel Context Logger in addition to several others, each containing a “logger ID” to distinguish it from the rest. As you can see in Figure 2, the !wmitrace.strdump command reveals the CKCL has a logger ID of 0x02.

Figure 3: Extracting the CKCL buffers with !wmitrace.logsave

The command !wmitrace.logsave is then used to extract the ETW performance data from the specified session. In our example, the appropriate command to extract the CKCL buffers into an event trace log (ETL) file would be, as seen in Figure 3:

!wmitrace.logsave 2 c:\ckcl.etl

Once the performance data has been extracted, you can immediately leverage the Windows Performance Analyzer (WPA) or XPerf to study the data. As you can see below in Figure 4, WPA reveals potential disk and file utilization issues right before the hang:

Figure 4: WPA revealing disk and file utilization issues right before the hang

Summary

Figuring out what caused a Windows server to hang can be a daunting task. But with the right tools and techniques, you can leverage ETW and the Windows Debugger to extract kernel performance data from system memory dumps. You can then use WPA or XPerf to analyze the performance data to determine what led up to the server hang. Keep in mind that while this article uses the CKCL trace session in the examples, you can create your own ETW trace session with WPR or XPerf specifying additional providers and logging options.

Posted in TUTORIALS

How to troubleshoot your hardest Windows Crashes

Posted by Alin D on July 14, 2011

Crash, boom, bang! Your Windows server just experienced a Blue Screen of Death (BSOD) and your helpdesk is being flooded with calls. The server is rebooting, but this is the fourth crash you’ve encountered this week and users are becoming unruly. To top it off, you now face spending hours on the phone, being passed around the world, with each vendor pointing to the other as the culprit.

It’s time to take matters into your own hands. With a basic knowledge of crash dump analysis, and a few simple commands, you can determine which driver is involved. Then, by intelligently searching the Internet you can potentially locate a hotfix or workaround to resolve the crashes.

This post will cover the tools and steps you’ll need to tackle some of the toughest Windows server outages.

To begin with, the diagram in next image provides an overview of what happens when a crash occurs. As you can see, when the server crashes it writes the contents of physical memory (RAM) to the pagefile on the system partition. On reboot, the pagefile is written to the memory.dmp file, which also resides on the system partition. Finally, after the server reboots, you can then use the Windows kernel debugger (WinDbg) with Microsoft’s symbol server to analyze the crash.

Three main areas need to be addressed to facilitate your crash dump analysis. First, the server must be configured to generate a crash when an unexpected condition or exception occurs. Next, you need to download the Windows debugger from Microsoft and set up the symbol server path. Finally, use the debugger to analyze the crash with a few simple commands. Now, let’s take a closer look at each area.

Configuring the dump

To configure your server to generate a memory dump, use the Startup and Recovery settings found under Control Panel | System applet | Advanced tab, shown in the next image. You can choose from three types of memory dump files: small, kernel or complete. By default, Windows will produce a small “mini-dump” file when the server crashes. This may sometimes contain enough debugging information, but typically a kernel memory dump file is required. In rare circumstances, it may be necessary to configure a complete memory dump to capture the required debugging information. Please see Microsoft KB article 254649 for additional information on configuring memory dump files.

Installing the Windows debugger

The next step is to install the Windows kernel debugger tool, which can be downloaded for free from Microsoft. There are three versions of the debugger (x86, x64 and IA64), depending on the architecture of the server where you plan to analyze the crash. Once WinDbg is installed, you must establish the symbol path to translate memory locations into meaningful references to functions or variables used by Windows. The typical symbol path used is SRV*c:\symbols*http://msdl.microsoft.com/download/symbols. See Microsoft KB 311503 for details on establishing your debugger’s symbol path.

Analyzing the crash

Now that you have configured the server to generate a memory dump and installed the debugger with the correct symbol path, you are ready to analyze a crash. There are two ways to start up the debugger: from the program group “Debugging Tools for Windows” or from the DOS prompt with the WinDbg command. From within the debugger, use the File pull-down menu to “Open crash dump…” and point the debugger to your dump file.

When the dump file loads, you will notice the debugger’s screen is divided into two regions: the output pane that occupies the majority of the window and the command prompt at the bottom. The first command to use is:

!analyze -v

This command will perform a preliminary analysis of the dump and provide you with a best guess as to which driver caused the crash. The first thing the command shows you is the bug check type (also known as a stop code) and the arguments. The bug check type is very important and should be included with your query when you search the Internet for possible causes and fixes. As we see in the following example, WinDbg displays the bug check type as an LM_SERVER_INTERNAL_ERROR (stop code 54). In this case, if you searched the Microsoft website for LM_SERVER_INTERNAL_ERROR, you would find the known issue and hotfix documented in Microsoft KB 912947. Even the first argument matches the KB article.

3: kd> !analyze -v
*****************************************************
*                Bugcheck Analysis                  *
*****************************************************

LM_SERVER_INTERNAL_ERROR (54)
Arguments:
Arg1: 00361595
Arg2: e8aab501
Arg3: 00000000
Arg4: 00000000
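The translation from a numeric stop code to its symbolic name is just a table lookup, which can be mimicked in a few lines of Python. The table below is a tiny, hand-picked subset for illustration only; the debugger's Bug Check Code Reference is the authoritative list.

```python
# A tiny (deliberately incomplete) lookup of bug check codes to names.
BUGCHECK_NAMES = {
    0x0A: "IRQL_NOT_LESS_OR_EQUAL",
    0x50: "PAGE_FAULT_IN_NONPAGED_AREA",
    0x54: "LM_SERVER_INTERNAL_ERROR",
    0x7E: "SYSTEM_THREAD_EXCEPTION_NOT_HANDLED",
    0xD1: "DRIVER_IRQL_NOT_LESS_OR_EQUAL",
}

def describe_bugcheck(code):
    """Render a stop code the way WinDbg's banner does: NAME (HEX)."""
    name = BUGCHECK_NAMES.get(code, "UNKNOWN_BUGCHECK")
    return f"{name} ({code:X})"

print(describe_bugcheck(0x54))  # → LM_SERVER_INTERNAL_ERROR (54)
```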

The !analyze -v command goes on to list which driver caused the crash. In our example, WinDbg accurately calls out the srv.sys driver:

Probably caused by: srv.sys (srv!SrvVerifyDeviceStackSize+78 )

Several other useful commands provide more information about the crash, including:

  • !thread – lists the currently executing thread
  • kv – displays the stack trace indicating which drivers and functions were called
  • lm t n – displays the list of installed drivers and their dates

Finally, you should be aware that the Windows debugger’s online help is excellent. In particular, you can look up the stop code for the crash and use the online help to recommend how to troubleshoot the issue. To find the list of stop codes, go to the Help pull-down menu and select Contents | Debugging Techniques | Bug Checks (Blue Screens) | Bug Check Code Reference. Then scan down the list to locate your stop code.

Many people think debugging a crash is better left for those with Ph.D.’s, but with a basic understanding and a few simple commands, anyone can get a leg up on identifying what is contributing to or causing a server crash. It is likely that someone else out there has already experienced the same crash, so a thorough Internet search will probably lead to potential workarounds or patches for the issue.

Troubleshooting Windows print spooler crashes

With the vast variety of printers and drivers on the market today, it’s a daunting task to determine which one caused your print spooler to crash or hang. Hundreds of users can be affected by a single rogue print driver that seldom leaves any clues as to the cause. This article will tackle how you can determine which print driver caused your spooler to crash.

Overview

The process of troubleshooting a print spooler crash is very similar to troubleshooting a system crash, as discussed in part one of this series. A print spooler, however, may not generate a crash dump on its own, so a tool called ADPlus is used to capture the memory dump. ADPlus is a VB script that can be downloaded for free from Microsoft as part of the Debugging Tools for Windows. Once you install the debugging tools, you will find ADPlus.vbs in the following folder:

C:\Program Files\Debugging Tools for Windows

ADPlus can be used in two modes depending on whether your print spooler is hanging or crashing. In hang mode, ADPlus forces a process dump on an application, or in this case, a print spooler. The dump contains all of the threads associated with the process in addition to the various DLLs and print drivers involved. A few simple debugger commands allow you to determine which printer is being accessed by the spooler and its corresponding driver.

In crash mode, ADPlus will monitor a process and capture its memory dump when it experiences an unhandled condition. The main difference between the two modes is that crash mode must be established prior to the process terminating, whereas hang mode is used at the moment the process locks up. In either mode, only the process you are debugging is affected; the rest of the processes and the operating system continue without downtime.

Once a process dump is captured, you can then use the Windows Debugger (Windbg) to analyze the failure. As discussed in part one, the debugger can also be downloaded for free from Microsoft as part of the Debugging Tools for Windows.

In the following sections, we’ll take a closer look at the steps required to capture a spooler dump, determine which print driver is the culprit and ultimately repair the problem.

Crash mode

As mentioned above, ADPlus crash mode captures a process memory dump when your print spooler is intermittently terminating. Crash mode must be established prior to the problem that is causing the print spooler failure. The very first time you use ADPlus, you must establish cscript as the default script interpreter. To accomplish this, open a command prompt and change your default directory to the Debugging Tools for Windows folder. Then execute the ADPlus.vbs script without any options:

C:\Program Files\Debugging Tools for Windows> ADPlus.vbs

You only need to perform this step once; you are then ready to use ADPlus to capture a spooler crash. Here we see the ADPlus syntax used to set up crash mode detection on the print spooler process:

ADPlus -crash -pn spoolsv.exe

This command will attach the console debugger (cdb.exe) to the print spooler process and minimize the window. Once an unexpected condition is encountered, the debugger will produce a process memory dump and terminate the process. By default, the dump is written to a subfolder in the Debugging Tools for Windows folder. You can then use the Windows Kernel Debugger to analyze the resulting dump file.

Hang mode

In hang mode, use ADPlus to force a process memory dump when a print spooler either stops responding or becomes 100% compute-bound. This is evident when users complain that their jobs aren’t printing even though the spooler process still exists. After forcing the process memory dump, ADPlus hang mode will resume the process instead of terminating it like in crash mode. Here we see the ADPlus syntax used to force a process crash with hang mode:

ADPlus -hang -pn spoolsv.exe

Analyzing the dump

Once the process dump file has been obtained, use the Windbg tool to analyze the print spooler failure. After installing Windbg, the first step to using the tool is to establish the debugger’s symbol path to point to the Microsoft Symbol Server. Next, open the crash dump file with Windbg using the File pull-down menu, Open Crash Dump…, and then issue the command:

!analyze -v

This command will perform a preliminary analysis of the dump and provide a best guess as to what caused the failure. The kv command will display the stack trace showing you which drivers or DLLs are involved. A stack trace is read from the bottom up so the top of the stack is the most recently executed function. In the following example, we see a stack trace illustrating a spooler failure caused by the ABCdriver:

Another useful command is !peb, which allows you to see all of the drivers and DLLs associated with the print spooler process. The command displays the process environment block as we see in the following example. Much of the output has been omitted […] as it goes on for several pages:

Finally, to determine the printer and job that is being accessed at the time of the failure, use the !teb command. That will display the thread environment block that provides the stack base and limit. You can then display the stack contents with the dc command to reveal the printer that is causing the problem. You will have to scroll through several pages of output, but you will eventually recognize the printer, job and port number in ASCII text to the right:

In this case, the printer name is PRINTER1, the job number is 203, and the port number is 04. The stack contents also contain the associated driver name if you look closely. Once you know the printer and the driver, you can contact the appropriate vendor to determine if an updated driver is available that resolves your issue.
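Scanning raw memory for readable text, as you do when eyeballing the right-hand column of the dc output, amounts to extracting runs of printable ASCII. A Python sketch of the idea follows; the stack bytes below are invented for illustration, not taken from a real spooler dump.

```python
import re

def ascii_strings(data: bytes, min_len: int = 4):
    """Pull printable-ASCII runs out of a raw memory buffer, similar to
    scanning the right-hand column of the debugger's dc output."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # runs of printable chars
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Hypothetical slice of stack memory containing printer and job info:
stack = b"\x00\x04\x1fPRINTER1\x00\x00Job 203\x00\x9c\x02Port 04\x00"
print(ascii_strings(stack))  # → ['PRINTER1', 'Job 203', 'Port 04']
```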

As you can see, troubleshooting a print spooler failure is straightforward once you become familiar with the tools. Starting with ADPlus to capture the dump, then using Windbg to analyze it, and finally leveraging the Web to intelligently search for similar crash footprints will lead you to your solution. Taking matters into your own hands will save you time, money and keep your users happy.

How to find Windows memory leaks

As we continue our series on tackling the toughest Windows server outages, the time has come to explore the different tools and techniques used to track down Windows memory leaks.

As you may know, memory leaks are caused by poorly written applications or drivers that allocate memory and then subsequently fail to de-allocate all of it. After time, this can lead to the depletion of system memory pools (paged or non-paged) causing the server to eventually hang.

Long before a Windows server hangs though, there are typically other symptoms of a memory leak. The main things to watch out for are entries in the system event log from the server service (SRV component). In particular, be on the lookout for:

Event ID 2019: The server was unable to allocate from the system nonpaged pool because the pool was empty

 or

Event ID 2020: The server was unable to allocate from the system paged pool because the pool was empty

These two events are indicative of a Windows memory leak and need to be investigated immediately. Other signs of a memory leak include excessive pagefile utilization and diminishing available memory.

Perfmon

The first tool typically used to diagnose memory leaks is Perfmon, a graphical tool built into Windows. By collecting performance metrics on the appropriate counters, you can determine whether the memory leak is being caused by a user-mode process (application) or a kernel-mode driver. The performance metrics can be collected in the background, with the counters written to a log file. The log file can subsequently be read by Perfmon or by the Performance Analysis of Logs (PAL) tool from CodePlex. Microsoft KB article 811237 explains how to set up Perfmon to log performance counters. There is also a free tool from Microsoft called PerfWiz, which provides a wizard to help set up Perfmon logging.

If you suspect a user mode application is leaking memory, you can use Perfmon to collect the Process object counters, Pool Paged Bytes and Pool Nonpaged Bytes for all instances. This will display whether any processes continue to allocate paged or non-paged pool, without subsequently de-allocating it. If you suspect a kernel mode driver is leaking memory, use Perfmon to collect the Memory object counters, Pool Nonpaged Bytes and Pool Paged Bytes.

In the following example, Perfmon is being used to monitor performance counters for the memory object, namely paged and non-paged pool. By right-clicking each counter, you can adjust the scale so both counters appear on the same graph. As you can see in the following image, the Pool Paged Bytes counter (red line) continues to grow without decreasing, meaning it is leaking memory. Looking at the minimum value for the paged pool counter, it appears it has gone from 118 MB to a maximum of over 350 MB.

So at this point in our example, we know we have a paged pool leak. We can then use Perfmon to examine the Process object for Pool Paged Bytes. If no processes show a corresponding increase in paged pool usage, we can conclude that a driver or kernel mode code is leaking memory.
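The "grows without ever decreasing" pattern can be captured in a crude heuristic over logged counter samples. This Python sketch is illustrative only; the thresholds are arbitrary assumptions of mine, not anything Perfmon itself computes.

```python
def looks_like_leak(samples, min_growth_ratio=2.0):
    """Crude leak heuristic over logged pool-byte samples: the counter
    trends upward almost everywhere and never returns near its start."""
    if len(samples) < 2:
        return False
    # Fraction of sample-to-sample steps that are non-decreasing.
    rising = sum(b >= a for a, b in zip(samples, samples[1:])) / (len(samples) - 1)
    grew_enough = samples[-1] >= samples[0] * min_growth_ratio
    return rising > 0.9 and grew_enough

# Values in the spirit of the graph above: paged pool climbing 118 → 355 MB.
paged_pool_mb = [118, 140, 163, 185, 210, 236, 261, 290, 322, 355]
print(looks_like_leak(paged_pool_mb))  # → True
```

A healthy counter that oscillates around a steady value (allocations matched by frees) would fail both conditions.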

Poolmon

To further isolate the memory leak, we need to determine which driver is allocating the memory. When drivers allocate memory, they insert a four-character tag into the memory pool data structure to identify which driver allocated it. By examining the various pool allocations, you can determine which drivers are responsible for allocating how much pool. To associate which tags correspond to certain drivers, see Microsoft KB article 298102. You could also install the Debugging Tools for Windows and check the following file:

C:\Program Files\Debugging Tools for Windows\Triage\Pooltag.txt

The Memory Pool Monitor utility (Poolmon) is a free tool from Microsoft that watches pool allocations and displays the results, illustrating the corresponding drivers. In the following example, Poolmon is being used to track the leaking pool tag “Leak” at the top of the list. Poolmon shows the number of allocations, the number of frees, the difference, and the number of bytes allocated. Poolmon will also show the name of the driver if it is set up properly.

Here we can see the tag “Leak” belongs to the Notmyfault.sys driver and has over 83 MB of paged pool allocated.
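Pooltag.txt is a plain text file with one "tag - binary - description" entry per line, so mapping a tag back to its driver is a simple parsing job. A Python sketch follows; the "Leak" entry shown is hypothetical (it mirrors the Notmyfault example above), while the real file ships with the Debugging Tools.

```python
def parse_pooltag_line(line):
    """Parse one entry in the Triage\\Pooltag.txt style:
    '<tag> - <binary> - <description>'.
    Returns None for blank lines, comments, or malformed entries."""
    line = line.strip()
    if not line or line.startswith("//") or line.startswith("rem"):
        return None
    parts = [p.strip() for p in line.split(" - ", 2)]
    if len(parts) != 3:
        return None
    tag, binary, description = parts
    return {"tag": tag, "binary": binary, "description": description}

# A hypothetical entry in the style of Pooltag.txt:
entry = parse_pooltag_line("Leak - notmyfault.sys - Sysinternals NotMyFault leak demo")
print(entry["binary"])  # → notmyfault.sys
```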

Windbg

If all else fails and your server locks up completely due to a memory leak, you can always force a crash dump and subsequently analyze it. The key things to look for when analyzing the crash with the Windows Kernel Debugger (Windbg) utility are the memory pool usage and which data structures are consuming the pool.

The first command to use in the debugger is !vm 1, as seen in the following example. This command will display the current virtual memory usage, in particular the non-paged and paged pool regions. The debugger will flag any excessive pool usage and any pool allocation failures as shown in next figure. The trick is to compare the usage with the maximum as highlighted in yellow below. If the usage is at or near the maximum, then the server hung because it ran out of pool.

Finally, you can use the debugger to display the paged or non-paged pool data structures with the !poolused command. Various options on the command allow you to specify either paged or non-paged pool and sort the output. In the following example, the !poolused 5 command is used to display the paged pool data structures, sorted in descending order by usage. In next image, you can see the pool structure with the tag “Leak” is consuming the most paged pool (over 115 MB) and is associated with the notmyfault.sys driver.

As you can see, using tools such as Perfmon, PerfWiz, PAL, Poolmon and Windbg, you can monitor the memory leak, determine whether it is paged or non-paged memory, and discover what driver or application is responsible. After that, contacting the software vendor is usually the best option to see if they have an updated driver or image available that resolves the memory leak.

Posted in TUTORIALS

How to use Xbootmgr to solve Windows boot problems

Posted by Alin D on June 10, 2011

Some of the toughest Windows performance problems to troubleshoot are those dealing with slow boot times.

Is the slow boot being caused by device drivers initializing, or are services taking too long to startup? Is a particular application startup delaying the boot sequence, or are numerous registry lookups causing the sluggish behavior? The answer to these questions can be revealed through the Microsoft Windows Performance Toolkit (WPT).

WPT consists of tools designed for performance analysis, including Xperf, which is used to collect Event Trace Logs (ETL) and subsequently analyze the data to produce graphs and tables. The toolkit also includes Xbootmgr, which lets admins gather boot time statistics and analyze data with Xperf.

Xbootmgr to the rescue
To begin, install the Windows Performance Toolkit. Notice that several tools will also be installed, including Xbootmgr and Xperf, located in this folder:

C:\Program Files\Microsoft Windows Performance Toolkit

From the DOS prompt, administrators can execute the Xbootmgr.exe tool to initiate a reboot and collect ETL data for later analysis. There are several command options to control the reboot and specify what data is to be collected. All of these options are thoroughly documented in the online help file, WindowsPerformanceToolkit.chm. The following is a typical Xbootmgr command:

Xbootmgr -Trace Boot -TraceFlags DIAG+DRIVERS+POWER+REGISTRY

This command will cause the server to reboot — so be ready. After the server comes back up, it will produce an ETL file containing data for the boot process. By default, Xbootmgr will continue to collect data for 120 seconds after logon, but this can be controlled by the -PostBootDelay option. In this example, the following ETL file will be generated:

Boot_DIAG+DRIVERS+POWER+REGISTRY_1.etl
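Since the trace flags are embedded in the file name, a small helper can recover them later. A Python sketch, assuming the default Boot_&lt;FLAGS&gt;_&lt;N&gt;.etl naming shown above:

```python
import re

def traceflags_from_etl_name(filename):
    """Recover the trace flags from Xbootmgr's Boot_<FLAGS>_<N>.etl
    naming scheme; returns None if the name doesn't match."""
    m = re.match(r"Boot_(.+)_(\d+)\.etl$", filename)
    if not m:
        return None
    return m.group(1).split("+")

print(traceflags_from_etl_name("Boot_DIAG+DRIVERS+POWER+REGISTRY_1.etl"))
# → ['DIAG', 'DRIVERS', 'POWER', 'REGISTRY']
```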

The next step is to use the Xperf tool to analyze the event trace log using this command:

Xperf Boot_DIAG+DRIVERS+POWER+REGISTRY_1.etl

This brings up the Xperf viewer where graphs and tables can help determine why the boot process is delayed. If a newly installed device driver is causing the delays, look at the Xperf driver delays graph, which illustrates the various device drivers and their corresponding delays in milliseconds (msec). For instance, Figure 1 shows drivers such as Storport, EmcpBase and termdd.sys taking much longer than the other driver requests.

Xperf Driver Delays graph


Slow boot times can also be caused by lengthy service startups since a service may depend on other services to load before it can. By looking at the Xperf services graph, admins can easily pinpoint if one service is causing delays in the startup of other services.

Another area that can affect the boot time during system startup is registry accesses. Some applications may lock the registry while performing updates, which can stall other applications from starting up. The Xperf registry graph displays the different types of accesses that are occurring within the registry and at what point during startup. Hovering over a particular point in the graph reveals the type of registry access.

Process lifetimes can also reveal whether timely progress is being made during the boot sequence. Next image shows the Xperf process lifetime graph, which illustrates when processes begin and terminate and can be used to determine if particular processes are causing delays when correlated with registry access, CPU usage or disk I/O utilization graphs.

Xperf Process Lifetime graph

Aside from the different analysis graphs, Xperf also enables admins to overlay one graph over another. Just right-click the graph and specify the desired graph. Next image shows the registry graph overlaid with the process lifetimes graph to determine which processes are responsible for spikes in registry activity.

Overlay feature in Xperf

As you can see, Xperf and Xbootmgr tools can reveal significant information about what happens during the boot process. The graphs are very intuitive to decipher and can point admins in the right direction when trying to determine the cause of a slow server boot. The tools are free from Microsoft as part of the Windows Performance Toolkit.

Posted in Windows 2008

Windows 7 and Windows Server 2008 R2 certification awarded to Perle Systems Serial and Parallel Cards

Posted by Alin D on March 3, 2011


NASHVILLE, TN—October 1, 2009— Perle Systems, the global developer and manufacturer of serial connectivity solutions today announced Windows 7 and Windows Server 2008 R2 certification for their full range of SPEED and UltraPort Serial Cards and SPEED Parallel cards. Perle is the first major serial connectivity company to have a digitally signed Microsoft driver for both 32-bit and 64-bit versions of Windows 7 and Windows Server 2008 R2.  All drivers can be downloaded from Perle’s website.

“Perle Systems continues to lead the industry when it comes to support for the widest range of operating systems,” comments Julie McDaniel, Vice President Marketing, Perle Systems. She continues, “Our users can be confident that our full line of serial and parallel cards will continue to operate on Microsoft’s latest operating systems. This certification and early adoption of a new standard demonstrates Perle’s commitment to customers for long-term investment protection and support.”

Microsoft’s Windows 7 and Windows Server 2008 R2 certification of products is granted after passing a series of rigorous tests. Once a product is certified, the company earns the right to use the highly respected Microsoft Windows 7 and Windows Server 2008 R2 logo. The authorized use of this logo is proof that a product or solution has met the stringent criteria set out by Microsoft, indicating reliability and technical excellence.

Perle’s Serial and Parallel Card lines enable you to easily add RS232, RS422, RS485 serial or parallel ports to your PC or server. Compatible with PCI, PCI-X or PCI Express bus slots, Perle cards are the only products that support all major operating systems including Windows, Vista, Linux, Solaris, SPARC as well as SCO.

About Perle Systems – http://www.perle.com:
Perle Systems is a leading developer, manufacturer and vendor of high-reliability and richly featured serial to Ethernet networking products. These products are used to connect remote users reliably and securely to central servers for a wide variety of business applications. Product lines include Console Servers for Data Center Management, Terminal Servers, Device Servers, Ethernet I/O and Serial Cards. Perle distinguishes itself through extensive networking technology, depth of experience in major real-world network environments and long-term distribution and VAR channel relationships in major world markets. Perle has offices and representative offices in 11 countries in North America, Europe and Asia and sells its products through distribution and OEM/ODE channels worldwide.

Article from articlesbase.com

Posted in Windows 2008

Windbg Minidump Tutorial: Setting Up & Reading Minidump Files

Posted by Alin D on December 15, 2010

This is a tutorial on how to set up and read your minidump files when you receive a BSOD (blue screen of death) in the attempts to gain further insight as to the cause of the problem. First thing is first. Download the latest debugging tools from the Microsoft site. Search for “debugging tools microsoft” in Google.

Then go to Start/Start Search and type in the command cmd.

Then change directories to:

C:\Program Files\Debugging Tools for Windows (x86)

by using the command:

cd c:\program files\debugging tools for windows (x86)

The cd command is case-insensitive.

Then type in:
windbg.exe -z c:\windows\minidump\mini062609-01.dmp -c "!analyze -v"

Your minidump file is located at C:\Windows\Minidump\Mini062609-01.dmp. It’ll be in the form “MiniMMDDYY-01.dmp”.

KERNEL SYMBOLS ARE WRONG. PLEASE FIX SYMBOLS TO DO ANALYSIS

If somewhere in the output of the Bugcheck Analysis you see an error like:

***** Kernel symbols are WRONG. Please fix symbols to do analysis.

Then it’s most likely that you are using outdated or incompatible symbols, that the files are corrupt, or that the proper symbols were not at the specified location when Windbg tried to analyze the minidump file. So what I did was open up the Windbg program located at C:\Program Files\Debugging Tools for Windows (x86) (in Vista, and I believe it’s the same location for XP).

SETTING THE SYMBOL FILE PATH VIA WINDBG COMMAND LINE:

This is an important step, so ensure that your symbol file path is set correctly, lest you get the “Kernel symbols are WRONG” error or other types of errors. Now set the Symbol File Path (File/Symbol File Path) to:

SRV*e:\symbols*http://msdl.microsoft.com/download/symbols

However, for some reason I found that you cannot change the Symbol File Path directly within the “File/Symbol File Path” field. Instead, you need to set it through the Windbg command window by going to:

“View/Command”

In the bottom of the command window beside the “kd>” prompt type this in:

.sympath SRV*e:\symbols*http://msdl.microsoft.com/download/symbols

The part between the two asterisks (*) is where the symbols from Microsoft’s servers will be downloaded to. The download is fairly large (approximately 22 MB), so make sure that you have sufficient disk space.
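After setting the path, you can force the debugger to re-read symbols and confirm that they resolve. These are standard WinDbg commands, shown here with the same example cache path used above:

```
.sympath SRV*e:\symbols*http://msdl.microsoft.com/download/symbols
.reload /f
lm
```

.reload /f forces symbols to be reloaded immediately rather than on demand, and lm lists the loaded modules so you can see which ones resolved against the symbol server.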

SETTING SYMBOL FILE PATH IN THE ENVIRONMENT VARIABLE:

Alternatively, you can set it as an environment variable, either a system or a user environment variable. To do this, press WINDOWS KEY+E. The WINDOWS KEY is the key to the right of the LEFT CTRL key on the keyboard. This will open up Windows Explorer.

Then click on the “Advanced system settings” at the top left of the window. This step applies to Vista only. For XP users, simply click on the Advanced tab.

Then click on the button “Environment variable” at the bottom of the window.

Then click on the “New” button under System Variables. Again you can create the environment as a user environment variable instead.

In the “Variable Name” type:
_NT_SYMBOL_PATH

In the “Variable Value” type:
symsrv*symsrv.dll*e:\symbols*http://msdl.microsoft.com/download/symbols

If you set the symbol file path as a system environment variable I believe you may have to reboot your computer in order for it to take effect.
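If you’d rather skip the GUI entirely, the same variable can be set from a command prompt with the built-in setx command (e:\symbols is just the assumed cache location from the examples above; the SRV* form shown earlier also works here):

```
setx _NT_SYMBOL_PATH "SRV*e:\symbols*http://msdl.microsoft.com/download/symbols"
```

setx writes the value to the user environment, so newly opened console windows and programs will pick it up, though already-running ones won’t.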

OUTPUT OF WINDBG COMMAND

So the following is the output for my crash:

Microsoft (R) Windows Debugger Version 6.11.0001.404 X86
Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [c:\windows\minidump\mini062609-01.dmp]
Mini Kernel Dump File: Only registers and stack trace are available

Symbol search path is: SRV*e:symbols*http://msdl.microsoft.com/download/symbols;I:symbols
Executable search path is:
Windows Server 2008/Windows Vista Kernel Version 6001 (Service Pack 1) MP (2 procs) Free x86 compatible
Product: WinNt, suite: TerminalServer SingleUserTS Personal
Built by: 6001.18226.x86fre.vistasp1_gdr.090302-1506
Machine Name:
Kernel base = 0x8201d000 PsLoadedModuleList = 0x82134c70
Debug session time: Fri Jun 26 16:25:11.288 2009 (GMT-7)
System Uptime: 0 days 21:39:36.148
Loading Kernel Symbols
………………………………………………………
……………………………………………………….
…………………………………………………..
Loading User Symbols
Loading unloaded module list
……………………….
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

Use !analyze -v to get detailed debugging information.

BugCheck A, {8cb5bcc0, 1b, 1, 820d0c1f}

Unable to load image \SystemRoot\system32\DRIVERS\SymIMv.sys, Win32 error 0n2
*** WARNING: Unable to verify timestamp for SymIMv.sys
*** ERROR: Module load completed but symbols could not be loaded for SymIMv.sys
Unable to load image \SystemRoot\system32\DRIVERS\NETw3v32.sys, Win32 error 0n2
*** WARNING: Unable to verify timestamp for NETw3v32.sys
*** ERROR: Module load completed but symbols could not be loaded for NETw3v32.sys
Processing initial command ‘!analyze -v’
Probably caused by : tdx.sys ( tdx!TdxMessageTlRequestComplete+94 )

Followup: MachineOwner
———

0: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

IRQL_NOT_LESS_OR_EQUAL (a)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high.  This is usually
caused by drivers using improper addresses.
If a kernel debugger is available get the stack backtrace.
Arguments:
Arg1: 8cb5bcc0, memory referenced
Arg2: 0000001b, IRQL
Arg3: 00000001, bitfield :
bit 0 : value 0 = read operation, 1 = write operation
bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 820d0c1f, address which referenced memory

Debugging Details:
——————

WRITE_ADDRESS: GetPointerFromAddress: unable to read from 82154868
Unable to read MiSystemVaType memory at 82134420
8cb5bcc0

CURRENT_IRQL:  1b

FAULTING_IP:
nt!KiUnwaitThread+19
820d0c1f 890a            mov     dword ptr [edx],ecx

CUSTOMER_CRASH_COUNT:  1

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0xA

PROCESS_NAME:  System

TRAP_FRAME:  821126c4 — (.trap 0xffffffff821126c4)
ErrCode = 00000002
eax=85c5d4d8 ebx=00000000 ecx=8cb5bcc0 edx=8cb5bcc0 esi=85c5d420 edi=ed9c7048
eip=820d0c1f esp=82112738 ebp=8211274c iopl=0         nv up ei pl nz na pe nc
cs=0008  ss=0010  ds=0023  es=0023  fs=0030  gs=0000             efl=00010206
nt!KiUnwaitThread+0x19:
820d0c1f 890a            mov     dword ptr [edx],ecx  ds:0023:8cb5bcc0=????????
Resetting default scope

LAST_CONTROL_TRANSFER:  from 820d0c1f to 82077d24

STACK_TEXT:
821126c4 820d0c1f badb0d00 8cb5bcc0 87952ed0 nt!KiTrap0E+0x2ac
8211274c 8205f486 00000002 85c5d420 ed9c7048 nt!KiUnwaitThread+0x19
82112770 8205f52a ed9c7048 ed9c7008 00000000 nt!KiInsertQueueApc+0x2a0
82112790 8205742b ed9c7048 00000000 00000000 nt!KeInsertQueueApc+0x4b
821127c8 8f989cd0 e79e1e88 e79e1f70 00000000 nt!IopfCompleteRequest+0x438
821127e0 8a869ce7 00000007 00000000 00000007 tdx!TdxMessageTlRequestComplete+0x94
82112804 8a869d33 e79e1f70 e79e1e88 00000000 tcpip!UdpEndSendMessages+0xfa
8211281c 8a560c7f e79e1e88 00000001 00000000 tcpip!UdpSendMessagesDatagramsComplete+0x22
8211284c 8a86e0ab 00000000 00000000 889a0558 NETIO!NetioDereferenceNetBufferListChain+0xcf
82112860 8a6d341e 878689e8 e79e1e88 00000000 tcpip!FlSendNetBufferListChainComplete+0x1c
82112894 8a6084f1 86c440e8 e79e1e88 00000000 NDIS!ndisMSendCompleteNetBufferListsInternal+0xb8
821128a8 8fe3f0ee 87a092b0 e79e1e88 00000000 NDIS!NdisFSendNetBufferListsComplete+0x1a
821128cc 8a6084f1 87a07230 e79e1e88 00000000 pacer!PcFilterSendNetBufferListsComplete+0xba
821128e0 8fe516f7 88940c10 e79e1e88 00000000 NDIS!NdisFSendNetBufferListsComplete+0x1a
WARNING: Stack unwind information not available. Following frames may be wrong.

Posted in TUTORIALS

Denormalization in SQL Server for Fun and Profit

Posted by Alin D on December 10, 2010

Almost from birth, database developers are taught that their databases must be normalized.  In many shops, failing to fully normalize can result in anything from public ridicule to exile to the company’s Siberian office.  Rarely discussed are the significant benefits that can accrue from intentionally denormalizing portions of a database schema.  Myths about denormalization abound, such as:

  • A normalized schema is always more stable and maintainable than a denormalized one.
  • The only benefit of denormalization is increased performance.
  • The performance increases from denormalization aren’t worth the drawbacks.

This article will address the first two points (I’ll tackle the final point in the second part of this series).  Other than for increased performance, when might you want to intentionally denormalize your structure?  A primary reason is to “future-proof” your application from changes in business logic that would force significant schema modifications.

Let’s look at a simple example.  You’re designing a database for a pizza store.  Each customer’s order contains one or more pizzas, and each order is assigned to a delivery driver.  In normal form, your schema looks like:

Table: Orders

Customer

Driver

Amount

Table: OrderItems

Order

Pizza Type

Planning Ahead. Let’s say you’ve heard the owner is considering a new delivery model.  To increase customer satisfaction, every pizza will be boxed and sent for delivery the moment it comes out of the oven, even if other pizzas in the order are still baking.

Since you’re a savvy developer, you plan for this and denormalize your data structure.  Though today the driver column is functionally dependent only on the order itself, you cross your fingers, take a deep breath, and violate Second Normal Form by placing it in the OrderItems table.  There: you’ve just future-proofed your application.  Orders can now have multiple drivers.

Your denormalization has introduced a small update anomaly (if an order’s driver changes, you have to update multiple rows rather than just one), but if the probability of the delivery model change is large, this is well worth the cost.  This is typical when denormalizing, but usually it’s a small problem, and one that can be handled automatically via triggers, constraints, or other means.  For instance, in this case, you can create (or modify the existing) update SP for Orders to cascade the change into OrderItems.  Alternatively, you can create an UPDATE trigger on OrderItems that ensures all rows within one order have the same driver.  When the rule changes in the future, just remove the trigger, with no need to update your tables or any queries that reference them.
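A sketch of such a trigger in T-SQL, assuming hypothetical column names OrderID and Driver on OrderItems (this version rejects inconsistent updates; a cascading version could update the sibling rows instead):

```sql
-- Hypothetical columns: OrderItems(OrderID, PizzaType, Driver).
-- While the one-driver-per-order rule holds, reject any update that
-- leaves two rows of the same order with different drivers.
CREATE TRIGGER trg_OrderItems_OneDriver
ON OrderItems
AFTER UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM OrderItems oi
        JOIN inserted i ON oi.OrderID = i.OrderID
        WHERE oi.Driver <> i.Driver
    )
    BEGIN
        RAISERROR('All items in an order must share one driver.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```

When the delivery model changes, DROP TRIGGER trg_OrderItems_OneDriver is the only schema change required.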

Now let’s consider a slightly more complex (and somewhat more realistic) case.   Imagine an application to manage student and teacher assignments for an elementary school.    A sample schema might be:

Table: Teachers

Teacher (PK)

Classroom

Table: Students

Student (PK)

Teacher (FK)

Planning Ahead. You happen to know that other elementary schools in the region are assigning secondary teachers to some classrooms.  You decide to support this in advance within your schema.  How would you do it via denormalization?  The ugly “repeating groups” solution of adding a “Teacher2” column is one solution, but not one that should appeal to you.    Far better to make the classroom itself the primary key, and move teachers to a child table:

Table: Classrooms

Classroom  (PK)

Teacher  (FK)

Table: Teachers

Teacher  (PK)

Classroom  (FK)

Table: Students

Student  (PK)

Classroom  (FK)

As before, this denormalization creates a problem we need to address.  In the future, the school may support multiple teachers in one classroom, but today that’s an error.   You solve that by the simple expedient of adding a unique constraint on the classroom FK in the teacher’s table.    When the business rule changes in the future, you simply remove the constraint.   Voila!  A far better solution than having to significantly alter your views, queries, and stored procs to conform to a new schema.
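That “simple expedient” is a one-line constraint in T-SQL; a sketch using the table and column names from the schema above:

```sql
-- Today: at most one teacher per classroom.
ALTER TABLE Teachers
ADD CONSTRAINT UQ_Teachers_Classroom UNIQUE (Classroom);

-- When secondary teachers become legal, just drop it:
-- ALTER TABLE Teachers DROP CONSTRAINT UQ_Teachers_Classroom;
```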

Future-proofing an application via denormalization can also involve the removal of tables, or rather reverse-decomposing multiple tables into one.  Imagine a book store chain with individual stores, each of which carries specific genres of books and handles only specific publishers.  Due to contractual obligations, if a store carries a genre, it must do so for all publishers it deals with.

Fully normalizing this relationship requires three tables.  With sample data, they might appear like this:

Table: StoreGenres

Store            Genre
Books R US       Mystery
Books R US       Science Fiction
Downtown Books   Textbooks
Downtown Books   Horror
Book Nook        Self Help

Table: StorePublishers

Store            Publisher
Books R US       Acme Publishing
Books R US       TGA Distributors
Downtown Books   TGA Distributors
Downtown Books   Arkham House
Book Nook        Acme Publishing
Book Nook        TGA Distributors

Table: PublisherGenres

Publisher          Genre
Acme Publishing    Mystery
Acme Publishing    Self Help
TGA Distributors   Textbooks
TGA Distributors   Mystery
TGA Distributors   Science Fiction
TGA Distributors   Self Help
Arkham House       Horror

The list of genres a store carries from each publisher is generated with the following query:

SELECT sp.Store, sp.Publisher, pg.Genre
FROM StorePublishers sp
JOIN PublisherGenres pg ON sp.Publisher = pg.Publisher
JOIN StoreGenres sg ON sg.Store = sp.Store AND sg.Genre = pg.Genre

Planning Ahead. What happens when the sales contract expires, and stores are free to carry only specific genres from individual publishers?  In that case, the above schema won’t work – if Downtown Books wants to carry mysteries from Acme, they’ll be forced into carrying Acme’s self-help books also.

To handle this contingency, you reverse-decompose the three tables into one:

Table: StorePublisherGenres

Store            Publisher          Genre
Books R US       Acme Publishing    Mystery
Books R US       TGA Distributors   Mystery
Books R US       TGA Distributors   Science Fiction
Downtown Books   TGA Distributors   Textbooks
Downtown Books   Arkham House       Horror
Book Nook        Acme Publishing    Self Help
Book Nook        TGA Distributors   Self Help

Note: this table data is nothing more than the results of the above query.  But by physically materializing it, you allow individual genres from each publisher to be inserted or deleted.  You’ve future-proofed the application to allow each to vary independently.  (You’ve also improved performance somewhat: you’ve recast a three-way JOIN into a single-table query.)
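Materializing the data is just the same query wrapped in an INSERT, assuming StorePublisherGenres has been created with the three columns shown:

```sql
INSERT INTO StorePublisherGenres (Store, Publisher, Genre)
SELECT sp.Store, sp.Publisher, pg.Genre
FROM StorePublishers sp
JOIN PublisherGenres pg ON sp.Publisher = pg.Publisher
JOIN StoreGenres sg ON sg.Store = sp.Store AND sg.Genre = pg.Genre;
```

From then on, rows can be inserted or deleted in StorePublisherGenres directly, with each store/publisher/genre combination varying independently.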

As in the prior examples, your denormalization has created a minor update anomaly.  If Downtown Books adds Acme Publishing to their list of publishers, you have to insert two new records, not one, at least until that contract expires.  Of course, you can always ensure consistency through a trigger on the table or an update stored procedure.  When the contract expires, simply update the trigger rather than replacing all three tables and the queries that reference them.

In these simple examples, you could say we’re not really denormalizing as much as we’re simply “pre-normalizing”.   In other words, while we’re technically violating normal form according to  the current business requirements, once the anticipated changes take place, our schema will again be normalized.  However, for more complex cases, this isn’t always true.

To see this, let’s return to our example of teachers and students, with the current business rule that each student is assigned to a single teacher and classroom, and the future possibility that backup teachers will be assigned to each class.  What if there were another possibility: that the school might instead “split” each classroom into two, and assign each subclass its own teacher and students?  Can we create a schema that allows for both this and the teacher/backup-teacher arrangement we already considered?

One possibility is to add a second foreign key into the students table:

Table: Students

Student (PK)

Teacher (Nullable FK)

Classroom (Nullable FK)

One of the FKs relates to teachers, the other to classrooms.  In the case of students assigned to a class with multiple teachers, the second FK is used and the first is left null.   If students are assigned to a teacher in a “subclass”, the reverse is true.

This is just a downright ugly approach and one that leads to some unpleasant UNION queries to splice together both halves of the query chain.   Another method I’ve often seen used is the “dynamic FK” approach:

Table: Students

Student (PK)

ParentKey (FK)

ParentKeyType

Column ParentKey can either be a Teacher key or a Classroom key; the column ParentKeyType contains a code or other value to indicate which type ParentKey really is.  This is an even uglier approach that will eventually drive you to a dependence on illegal street drugs and an early grave.  Let’s try for something better.

The key here is to more closely model the real situation.  If the school may split classes into subclasses, they’re effectively creating a new “virtual” sub-classroom.  So let’s create a table for that entity, and attach all our other tables to it:

Table: Classrooms

Classroom  (PK)

Teacher  (FK)

Table: Subclasses

Subclass  (PK)

Classroom  (FK)

Table: Teachers

Teacher  (PK)

Subclass  (FK)

Table: Students

Student  (PK)

Subclass  (FK)

How well does this schema handle the current business rules, as well as each of the two possible future variants?

Current Rule: Each teacher assigned to a single classroom; each student assigned to a single teacher. In this case we create a UNIQUE constraint enforcing one subclass per class, and a second UNIQUE constraint enforcing one teacher per subclass.  To make the model a little easier to use, we might also want a trigger on Classrooms to automatically create and manage the matching subclass row for each class.  Even though students aren’t directly assigned to teachers or classes, the constraints create a 1:1 relationship that allows us to enforce that relationship.

Variant #1: Each student assigned to a single classroom; one or more teachers per class. In this case, we drop the UNIQUE constraint on teachers, allowing multiple teachers to be assigned to each subclass.

Variant #2: Each student assigned to a single teacher; one or more teachers per class. Here, we leave the UNIQUE constraint on teachers, but drop the constraint on subclasses, and the classrooms trigger that synchronizes subclasses.    When multiple teachers are assigned to a class, we insert a new subclass row and assign the second teacher to it.  This allows us to enforce the one-teacher-per-student rule, but still allow teachers to share classrooms.
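The three rule sets differ only in which constraints exist; a sketch in T-SQL using the table names above:

```sql
-- Current rule: one subclass per classroom, one teacher per subclass.
ALTER TABLE Subclasses ADD CONSTRAINT UQ_Subclasses_Classroom UNIQUE (Classroom);
ALTER TABLE Teachers   ADD CONSTRAINT UQ_Teachers_Subclass   UNIQUE (Subclass);

-- Variant #1 (multiple teachers per class):
-- ALTER TABLE Teachers DROP CONSTRAINT UQ_Teachers_Subclass;

-- Variant #2 (split classrooms):
-- ALTER TABLE Subclasses DROP CONSTRAINT UQ_Subclasses_Classroom;
-- (and drop the classroom-sync trigger)
```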

So there you have it: a data model that flexibly handles three entirely different sets of business logic, without being fully normalized for any of them.    You can now rapidly address future requirement changes without having to do more than drop a constraint.

Conclusion

I hope this article has shown you that thoughtful, targeted denormalization can often be a significant benefit in future-proofing your applications.  Developers often plan ahead by increasing column lengths or including additional columns, but the changes these prevent are minor compared to the significant restructuring that you can prevent through denormalization.  So plan ahead, and design for the future.

However, the more common reason to denormalize is to increase application performance, not future-proof.    In the second half of this article, I’ll show the magnitude of performance gains you can achieve from denormalization.  They’re larger than you might think.

Posted in SQL

Stuxnet Worm

Posted by Alin D on October 11, 2010

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government–the U.S. and Israel are the most common suspects–specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines–and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location–and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. These have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones–perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup–surely any organization that goes to all this trouble would test the thing before releasing it–and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates–India, Indonesia, and Pakistan–are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things–large and small–that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days–but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Source: Here

Posted in TUTORIALS

Windows User State Virtualization – Mixed Environments

Posted by Alin D on October 7, 2010

Designing a User State Virtualization strategy for a mixed environment poses a number of different challenges. By mixed environment I’m referring to a client computing infrastructure that has:

  • Different versions of Microsoft Windows such as Windows 7, Windows Vista and Windows XP on different computers
  • Different architecture versions of the same version of Windows such as Windows 7 x86 and Windows 7 x64 on different computers
  • Different versions of applications such as Office 2010, Office 2007 and Office 2003 on different computers
  • Different architecture versions of the same application such as Office 2010 x86 and Office 2010 x64 on different computers

This article examines the issues that can arise when planning USV solutions for mixed environments and describes some best practices for designing and implementing such solutions.

Planning USV for Mixed Windows Versions

As described in the first article of this series, Windows Vista introduced a new “v.2” user profile that has a flattened folder structure that separates user data and settings better than the Windows XP user profile did. As a result of this change, older Windows XP user profiles are not compatible with the newer v.2 profiles of Windows Vista. This means that you can’t use Roaming User Profiles (RUP) as a solution for roaming between computers running Windows Vista and Windows XP. If you try to implement RUP in a mixed XP/Vista environment, users who roam between the two OS versions will end up with two separate profiles on the RUP server, one profile for XP computers and the other for Vista computers.

No changes were made to user profiles in Windows 7 and the user profile structure in Windows 7 is identical to that in Windows Vista. This means you can use RUP to enable users to roam between computers running Windows 7 and Windows Vista provided there are no other architecture or application-specific issues as described in the sections below. It also means that you can’t use RUP to roam between Windows 7 and Windows XP computers.

If users do need to roam between computers running Windows XP and computers running later versions of Windows, you can use Folder Redirection (FR) with Offline Files (OF) enabled to redirect Documents and the other folders where users store work-related data. This allows user data to be accessible from computers running any version of Windows. You cannot roam user settings, however, since user settings reside both in the AppData\Roaming folder and in the Ntuser.dat file (the HKCU registry hive) in the root of the user’s profile. Since RUP cannot be used in this scenario, and since AppData\Roaming should never be redirected unless you also use RUP, only user data can be roamed in this scenario, not user settings. Table 1 summarizes a USV strategy for mixed environments running different versions of Windows on different computers.

OS versions | RUP | FR with OF
XP and Win7 | No | Yes (data folders only)
XP and Vista | No | Yes (data folders only)
Vista and Win7 | Yes | Yes

Table 1: USV strategy for mixed environment having different Windows versions on different computers
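The rules in Table 1 boil down to a single test: RUP works only when both operating systems use the same profile format. As an illustration only (this is not a Microsoft tool, and the helper names here are invented), the logic can be sketched in Python:

```python
# Sketch of Table 1's decision logic. Hypothetical helper: RUP is
# viable only when both OS versions share the same profile format
# (XP uses the old layout; Vista and Win7 use "v.2" profiles).
# FR with OF can always carry data folders, regardless of OS mix.

PROFILE_FORMAT = {"XP": "v1", "Vista": "v2", "Win7": "v2"}

def usv_options(os_a, os_b):
    """Return (rup_supported, fr_with_of) for a pair of Windows versions."""
    rup = PROFILE_FORMAT[os_a] == PROFILE_FORMAT[os_b]
    fr = "Yes" if rup else "Yes (data folders only)"
    return rup, fr

print(usv_options("XP", "Win7"))     # (False, 'Yes (data folders only)')
print(usv_options("Vista", "Win7"))  # (True, 'Yes')
```

Running the check against each row reproduces the table: only the Vista/Win7 pair supports RUP.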

If you plan on implementing FR in a mixed XP and Win7 (or mixed XP and Vista) environment and you need to redirect the Pictures, Music or Videos folder, you will need to select the Follow The Documents Folder option on the Target tab of the redirection policy for these folders (see Figure 1). Doing this causes these folders to be redirected as subfolders of the Documents folder (as in XP) instead of as peers of the Documents folder (as in Vista and later), and causes them to inherit their redirection settings from the Documents folder instead of having these configured on the folders themselves. Don’t do this, however, unless you have users who still need to access their redirected data folders from computers running Windows XP, since choosing this option alters the structure of the user’s profile. If users only need to access redirected data from computers running Windows Vista or later, don’t select Follow The Documents Folder when redirecting the Pictures, Music or Videos folders. And in any case, you shouldn’t redirect these particular folders at all unless there is a business need to do so (such as centrally backing up internally developed training videos or in-house developed graphics).


Figure 1: Configuring redirection on Pictures to follow Documents

Alternatively, instead of selecting Follow The Documents Folder individually for the Pictures, Music and Videos folders, you can simply select Also Apply Redirection Policy To Windows 2000, Windows 2000 Server, Windows XP and Windows Server 2003 Operating Systems on the Settings tab, as shown in Figure 2, since this automatically configures the Pictures, Music and Videos folders to follow the Documents folder.


Figure 2: Enabling this setting causes Pictures, Music and Videos to follow Documents.

Planning USV for Mixed Windows Architectures

Beginning with Windows Vista, two hardware architectures have been available for Windows platforms: x86 (32-bit) and x64 (64-bit). An x64 edition of Windows XP was also released but was never widely deployed, largely due to lack of device driver support, so we won’t consider Windows XP x64 in this discussion.

While the underlying user profile folder structures of Windows 7 x86 (or Windows Vista x86) and Windows 7 x64 (or Windows Vista x64) are identical, there are differences in how the Windows registry is structured on x86 and x64 versions of Windows. Specifically, the registry on x64 Windows also contains the x86 registry structure, but the reverse isn’t true: the registry on x86 Windows does not contain any x64 registry structure. Another issue is that the locations of some programs are stored in the registry as static paths such as C:\Program Files or C:\Program Files (x86), which means that when you roam between 32-bit and 64-bit machines these registry values will typically cause problems. The result of these differences is that you can’t use RUP to roam users between computers running Windows 7 x86 (or Windows Vista x86) and computers running Windows 7 x64 (or Windows Vista x64).
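One way to see why hard-coded paths break x86/x64 roaming: a 32-bit application on x64 Windows records its install directory under C:\Program Files (x86), a folder that doesn’t exist on x86 machines. A pre-flight check along the following lines (purely illustrative; not part of any Microsoft tooling, and the value names are made up) could scan exported HKCU values for such architecture-specific paths:

```python
# Illustrative sketch: flag registry values that embed an
# architecture-specific static path. On an x86 machine the
# "Program Files (x86)" folder doesn't exist, so any roamed
# value pointing there will dangle.

ARCH_SPECIFIC_MARKER = "program files (x86)"

def flag_arch_specific(values):
    """values: dict of value-name -> string data exported from HKCU.
    Returns only the entries that hard-code an x86-specific path."""
    return {name: data for name, data in values.items()
            if ARCH_SPECIFIC_MARKER in data.lower()}

exported = {
    "InstallDir": r"C:\Program Files (x86)\ExampleApp",  # would break
    "DataDir":    r"%APPDATA%\ExampleApp",               # relocatable
}
print(sorted(flag_arch_specific(exported)))  # ['InstallDir']
```

Values built from environment variables such as %APPDATA% survive roaming; absolute architecture-specific paths do not.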

However, if users do need to roam between computers running x86 and x64 versions of Windows, you can use FR with OF to redirect Documents and other data folders so that work-related data is accessible from computers running either architecture. You cannot roam user settings, however, since user settings in HKCU on a computer running an x64 version of Windows are not compatible with user settings in HKCU on a computer running an x86 version. Table 2 summarizes a USV strategy for mixed environments running x86 versions of Windows on some computers and x64 versions on others.

OS architectures | RUP | FR with OF
Win7 x86 and Win7 x64 | No | Yes (data folders only)
Vista x86 and Vista x64 | No | Yes (data folders only)

Table 2: USV strategy for mixed environment having both x86 and x64 versions of Windows on different computers

Planning USV for Mixed Application Versions/Architectures

Issues involving applications in a roaming environment are similar to those involving Windows versions. For example, say you have Windows Vista on some computers and Windows 7 on others. You also have version N of an application installed on the Vista machines, but the newer version N+1 of the same app installed on the Windows 7 machines. If you implement RUP and/or FR/OF in such an environment, can you expect users to experience problems when they work with this application?

Probably. It’s likely that the new version of the app has more features than the old one, and new features will undoubtedly mean new per-user registry settings and possibly new user settings stored as files under the AppData\Roaming folder. What happens when registry settings or AppData\Roaming files used by the new version of the app are loaded by the old version? Who knows! The only way to be sure this scenario will work is to test, test and test again before you deploy your USV solution in your production environment. Otherwise, users may find that certain apps crash or hang unexpectedly, or behave in strange and unpredictable ways. Such a scenario could even cause users to lose data or cause data to be corrupted. It’s best to play it safe and make sure that, regardless of which version of Windows is running on each computer, the same version of each app is installed. Be kind to your helpdesk personnel and don’t let them be inundated with complaints from angry users.

This is even more true with different architecture versions (x86 or x64) of applications. For example, say you have the x64 version of a particular application installed on Windows 7 x64 computers and the x86 version of the same application installed on Windows Vista x64 computers. The OS architectures are both x64, which supports a RUP scenario, but it’s likely that the x86 and x64 versions of the application store their settings in different parts of HKCU and maybe even in different folders and files under AppData\Roaming. This means the same kind of frustrating, unpredictable behavior may occur if users work on the same data file first on a computer running the x86 version of the app and later on a second computer running the x64 version. Even worse, the data file being worked on might become corrupted. I’m not saying this will happen for sure, and the only way to know is to test, test and test again. But it’s better to play it safe and simply standardize all your computers on either the x86 or x64 version of the app. This may not be a big issue today, since 64-bit apps like the 64-bit version of Office 2010 are only now appearing, but in the future it’s likely to be a concern as more software vendors release 64-bit versions of apps that until now have only been available in 32-bit form. Table 3 summarizes a USV strategy for mixed environments running different versions/architectures of applications on different computers.

App versions/architectures | RUP | FR with OF
Multiple different versions of the same app | Play it safe: don’t use RUP | Yes (data folders only)
Both x86 and x64 versions of the same app | Play it safe: don’t use RUP | Yes (data folders only)

Table 3: USV strategy for mixed environment having different application versions/architectures on different computers
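Table 3’s “play it safe” rule can be restated as an invariant: enable RUP for a roaming scope only if every computer presents the identical version and architecture of each shared application. A rough pre-deployment check (hypothetical, for illustration only) could look like this:

```python
# Hypothetical check of Table 3's rule: RUP is considered safe for a
# roaming scope only when every computer reports the exact same
# (version, architecture) pair for every application.

def rup_safe(computers):
    """computers: list of dicts mapping app name -> (version, arch).
    Returns True only if all inventories are identical."""
    baseline = computers[0]
    return all(inventory == baseline for inventory in computers[1:])

fleet = [
    {"Office": ("2010", "x86")},
    {"Office": ("2010", "x64")},  # same version, different architecture
]
print(rup_safe(fleet))  # False: architectures differ, so don't use RUP
```

Any mismatch, whether in version or in architecture, fails the check and pushes you to the FR-with-OF-only column of the table.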

If there is a clear business need to provide users with multiple versions of applications, or with different architecture versions of applications, you should consider implementing an application virtualization solution from Microsoft such as Microsoft Application Virtualization (App-V) or Remote Desktop Services RemoteApp, choosing whichever meets your needs in terms of functionality and manageability.

Conclusion

The bottom line in mixed environments (different versions/architectures of Windows/applications) is to keep things simple and play it safe. Your USV strategy should be to virtualize only user data folders like Documents (and possibly also Desktop, Pictures, etc.), and you should use FR together with OF to make user data available to users from any computer they log on to. Do not try to virtualize user settings using RUP or by redirecting the AppData\Roaming folder. If possible, try to standardize on a single version/architecture of each of your applications.

Posted in TUTORIALS, Windows 2008 | Leave a Comment »

Sound Device Drivers For Windows Xp – Secure Web Download !

Posted by Alin D on September 17, 2010

I’d like to let you in on a new system that allows you to painlessly get a sound device driver for Windows XP without even having to search the net! You should know that obtaining a driver from web pages that are foreign to you can make Windows vulnerable to undesired threats such as computer viruses. I’d be pleased to share the secrets you need to know; you can accomplish this task very easily and avoid trouble.

Click here to get a sound device driver for Windows XP now!

All too commonly, computer users accidentally end up using unsuitable drivers, which often leads to numerous and varied system troubles. Was it a time-consuming (time-wasting) task when you last attempted to find the recommended driver for a component of your PC? Acting as a ‘communicator’ between Windows and a device, the driver is definitely a key ‘ingredient,’ since without it you aren’t able to use your PC’s various components. The good news is that there’s a special program on the web that instantly locates, repairs, and updates practically any driver you might need. You’ll soon see that this specially designed driver scanner will uncover faulty or obsolete drivers and instantly replace them with the most recent ones.

A computer’s drivers are necessary components that require occasional care, in the same way that you (hopefully) ensure your Windows system is running at its best. As an added bonus, this tool goes the extra mile and upgrades the speed and functionality of your PC while freeing it of troubles brought on by questionable drivers. You should definitely discontinue using out-of-date drivers, since these can trigger miscellaneous faults and even Windows crashes in some cases.

For those who need to get a sound device driver for Windows XP, the method described in this article supplies some real bonuses that weren’t easy to come by before (if available at all). Just imagine how many problems down the line you can steer clear of when your computer’s drivers are automatically updated around the clock. Will these utilities handle every one of your driver hassles? Probably not, but I heartily encourage you to try one the minute you’ve finished this paragraph. We have many things to take into consideration whether we use our computers a lot or a little; it’s clear that the driver update dilemma is worthy of some close examination. If you’re acquainted with users who are experiencing assorted driver issues, please take a moment to forward them this article.

Posted in TUTORIALS | Leave a Comment »

Update A Video Driver For Windows Xp – Secure Web Download !

Posted by Alin D on September 15, 2010

You’ve come to the right place: I would like to provide you with a simple and reliable method to get an updated video driver for Windows XP, easier and faster than you can imagine. You should know that obtaining a driver from anonymous or uncertain websites can make Windows vulnerable to various uninvited menaces, including harmful code. Take a few moments to look over the tips provided here before you “take the plunge” and modify your drivers.

Click here to get an updated video driver for Windows XP now!

You’ve likely encountered the common and frustrating situation where you need a particular driver and a simple web query gives you more than you asked for, but not what you needed. How long did it take when you last looked for information on the recommended driver required to run something on your computer? Many of you probably don’t understand what a driver does: a driver is basically a software program whose function is to communicate between a hardware component (usually) and the programs that use it. The last time I searched the web for a driver, I ran across a driver searcher that tracks down the latest edition of the driver you require in mere seconds. A tool such as this one will not just track down your desired driver; it even brings your mouse, Windows and wireless device drivers up to date, among others.

A computer’s drivers are necessary components that require occasional care, not unlike the procedures you follow to manage and safeguard your Windows system. Take advantage of these scanners and it’s a very simple process to get whatever drivers are necessary in a single click: what you need, on the spot. I strongly advise against installing drivers downloaded from anonymous sites, as this can lead to the hassle of viruses and spyware.

For both speed and safety, most people would appreciate being able to get an updated video driver for Windows XP with such a safe and easy solution rather than by the usual web surfing. Remind yourself that a single unsuitable driver can lead to major hassles and oftentimes end up crashing your operating system. Will it really be able to locate all the drivers you need? Check it out: just download your choice of these programs and see if it helps; if I’m this satisfied, I’m sure you’ll find it convenient, too. There are many aspects we need to keep in mind over the course of a PC’s life; it’s clear that the driver update dilemma needs our close attention. If within your circle of friends and family there are some who face driver problems of any kind, feel free to forward them the information I’ve provided.

Posted in TUTORIALS | Leave a Comment »