Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘software vendors’

How to spot bad SQL query performance

Posted by Alin D on October 20, 2011

Once you’ve fine-tuned your databases’ indexes, maxed out your hardware and gotten the fastest disks money can buy, you’ll hit a limit on how much performance you can squeeze from your SQL Server machines.

At some point you have to focus on fine-tuning the applications rather than beefing up SQL Server itself. That takes you into the somewhat tricky world of query analysis, where you try to identify database queries that, because of the way they are written, aren’t performing as well as they could be.

SQL Server comes with an excellent tool, SQL Profiler, that’s designed to capture traces. Think of these as similar to a network packet capture: you’re actually capturing the raw queries being fed to SQL Server, along with information about their execution time. Using this raw data, you can spot bad SQL query performance and then offer the application developers advice on how to improve the offending queries.

Actually improving performance depends on the ability to change the applications themselves, so this often isn’t an option for prepackaged applications whose source code you can’t modify. Instead, this approach is usually limited to in-house applications, where you or someone you work with can get at the code to make alterations and improvements.

When you open SQL Profiler, you’ll start by creating a new trace. Part of the trace definition is a list of the events that you want to capture. You’ll usually want to capture remote procedure call (RPC) events as well as Transact-SQL events, as these two event types represent the two ways queries can be submitted to SQL Server or stored procedures can be executed. I usually include the following event classes in my traces:

  • RPC:Completed. This is generated after a stored procedure is executed from an RPC and includes information about such parameters as the duration of the execution, the CPU utilization and the name of the stored procedure.
  • SP:StmtCompleted. This is fired whenever a statement within a stored procedure finishes running and also includes data on metrics such as execution duration and CPU use.
  • SQL:BatchStarting. You’ll see this whenever Transact-SQL batches begin, including those inside and outside stored procedures.
  • SQL:BatchCompleted. This occurs when a Transact-SQL batch finishes; it provides data similar to the RPC and stored-procedure examples listed above.
  • Showplan XML. This gets you a graphical execution plan for a query—key to understanding how the query was executed and spotting performance problems.

Once your trace is set up, start capturing data. You’ll want to capture representative data, and often that means running the trace during production workloads. Be sure to capture to a file or SQL Server table that’s on a machine other than the one you’re analyzing so that the analysis itself doesn’t affect performance.

You’ll need to tell SQL Profiler which data columns you want to view; I usually start with this list:

  • Duration
  • ObjectName
  • TextData
  • CPU
  • Reads
  • Writes
  • DatabaseName
  • ApplicationName
  • StartTime
  • EndTime
  • EventSequence

These columns give me good insight into how long each query took to run, and I can often just skim through the Duration column looking for especially large values. You’ll want to focus on longer-running queries to see if you can improve execution time. The duration is shown in milliseconds (although it’s stored internally in microseconds), so don’t be alarmed if all of the values seem large at first.

I’ll also scan through the CPU column, since a query can run quickly but consume a lot of CPU time. Heavy-CPU queries will often bog down when the server is especially busy and can’t devote a lot of CPU capacity to them; as a result, rewriting queries so that they’re a bit less CPU-hungry can result in better performance. Profiler lets you create filters, and I’ll often start by creating a filter that hides anything taking less than 5,000 milliseconds, just so I can focus on the longer-running queries.
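
Once the trace has been saved to a table (or loaded from a .trc file with fn_trace_gettable), you can do this kind of filtering with a query instead of scrolling through Profiler. Below is a minimal sketch in Python using pyodbc; the server, database and dbo.MyTrace table names are placeholders, and the duration filter assumes the trace was written by SQL Server 2005 or later, where saved traces record Duration in microseconds.

```python
# Minimal sketch: find long-running statements in a saved Profiler trace.
# Assumes the trace was saved to a table named dbo.MyTrace (placeholder name);
# fn_trace_gettable() can load a .trc file into the same shape if needed.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=analysis-server;DATABASE=TraceDB;Trusted_Connection=yes;"
)

query = """
SELECT TOP (50)
       Duration / 1000 AS DurationMs,   -- saved traces store Duration in microseconds
       CPU, Reads, Writes,
       DatabaseName, ApplicationName, TextData
FROM   dbo.MyTrace
WHERE  Duration > 5000 * 1000           -- only statements longer than 5,000 ms
ORDER  BY Duration DESC;
"""

for row in conn.cursor().execute(query):
    print(f"{row.DurationMs:>10} ms  CPU={row.CPU:<8} {str(row.TextData)[:80]}")
```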

All in all, there are a number of things to look for, though most of the fixes for these problems will have to be implemented by the application developer:

  • Ad hoc SQL queries that are run outside of a stored procedure. Stored procedures almost always offer better performance because SQL Server can cache their execution plans; ad hoc queries should, whenever feasible, be converted to stored procedures.
  • Table scans in the execution plans of long-running or CPU-heavy queries. A table scan operation indicates the lack of a suitable index, and putting an index in place to eliminate the scan can have an immediate, positive effect on performance.
  • Queries that include a large number of joins. Joins take time, and while SQL Server is obviously designed to handle them, a large number of joins can really slow things down. The general rule of thumb I use is seven joins; if you have more than that, you have to start looking at ways to cut back.
  • A slow-running query that always runs slowly. This is a query that could perhaps be rewritten to perform better. A query that runs slowly some of the time is one that’s likely being affected by outside factors, such as locks or resource contention.

SQL query-performance tuning is as much art as science, and really, it belongs to the realm of application developers. The goal for database folks is to identify those slow-running or CPU-intensive queries, gather evidence and then work with developers to find ways of improving them.

Other good tools for lousy queries
Although SQL Profiler is a great tool, it doesn’t actually show you slow queries; you have to examine the data Profiler captures and figure out which queries are “slow” on your own. If query troubleshooting becomes a big part of your daily life, then you might want to move beyond Profiler and into a dedicated query analysis tool.

These tools, written by third-party software vendors, are designed to capture data in much the same way that Profiler does (some actually accept a Profiler capture file as input), and then identify poorly performing queries for you. In many cases, these tools can tell you why a query is performing poorly and even suggest changes that would help improve performance.

Vendors in this space include SQL Sentry, Red Gate, Idera, Quest Software, DBSophic, and more. Look for tools that can interface with, or even completely replace, SQL Profiler, and that offer automated query analysis and prescriptive advice.

If you start talking to vendors about a potential purchase, be sure to ask how much, if any, impact the product will have on SQL query performance when analyzing a production server. Some vendors have techniques to minimize or even eliminate production impact, and that’s always a good thing.

Posted in SQL

Office 2010 can’t be targeted by new zero-day Flash attacks

Posted by Alin D on May 20, 2011

Microsoft weighed in today on the new, targeted zero-day attacks revealed by Adobe this week that hide a Flash Player exploit inside Excel spreadsheet documents — confirming that Office 2010 is safe from the attack due to built-in security mitigation features and offering stopgap protection measures for earlier versions of its software.

Adobe plans to issue a patch next week for the flaw, which affects Adobe Flash Player versions 10.2.152.33 and earlier. According to Microsoft’s analysis, the exploit loads shellcode into memory, performs heap spraying, and then loads the Flash byte stream from memory to trigger the previously unknown CVE-2011-0609 flaw.

“Microsoft is aware of public reports of attacks using Adobe Flash Player. We encourage customers to review Adobe’s advisory. Office 2010 users are not susceptible to the current attacks as they do not bypass Data Execution Prevention (DEP). Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) offers further mitigation for this vulnerability,” says Jerry Bryant, group manager of response communications at Microsoft.

Users of earlier versions of Office should run Microsoft’s EMET, which helps block targeted attacks exploiting unpatched vulnerabilities with mitigations for third-party apps and older Microsoft apps.

“The current attacks do not bypass the Data Execution Prevention security mitigation (DEP). Microsoft Office 2010 turns DEP on for the core Office applications, and this will also protect Flash Player when it is loaded inside an Office application. In addition to that, users of the 64 bit edition of Microsoft Office 2010 have even less exposure to the current attacks as the shellcode for all the exploits we’ve seen will only work on a 32 bit process. What’s more, if an Office document originates from a known unsafe location such as email or the internet, Office 2010 will activate the Protected View feature,” according to a new blog post by Microsoft’s Andrew Roths and Chengyun Chu today.

In its analysis of the zero-day malware, Microsoft found a file that appears to have been used for fuzzing Flash files. “We suspect this vulnerability was found using fuzzing technology from clean Flash files, because we found a file on the Internet that looks like it might have been used for the fuzzing. Through differential analysis between the original clean file and the exploit file, we could confirm the vulnerability,” the blog post says.

But the Flash-rigged Excel file highlights an underlying problem Microsoft has not directly addressed, security experts say: the fact that software vendors are packing products with excess functionality that only opens the door for abuse.

Roel Schouwenberg, senior antivirus researcher for Kaspersky Lab, says the ability to embed Flash SWF files inside Excel documents really isn’t necessary. “Web browsers all have plug-ins, and it’s common practice to be able to disable plug-ins … I don’t want to see Flash files in Excel. Admins should be able to disable it,” he says. “We as an industry are looking more at ways to reduce the attack surface.”

But Microsoft’s integration among its applications for productivity purposes makes sense, he says. “But Microsoft could look at the Adobe model … allowing admins to blacklist the use of certain features within Reader,” for example, Schouwenberg says. Complexity in software basically causes more security issues, he says.

It’s all about reducing the attack surface, says Brad Arkin, senior director for product security and privacy at Adobe. “If you can reduce the attack surface, hopefully, fewer things will go wrong,” Arkin says.

Meanwhile, Microsoft says there’s also a workaround in Office 2007 to protect against the Flash attacks: change the setting in the Trust Center to “disable all controls without notification.”

Posted in Security

NIS (Network Inspection System) in Threat Management Gateway

Posted by Alin D on February 11, 2011

Network Inspection System (NIS) is the vulnerability signature component of TMG’s Intrusion Prevention System (IPS). NIS is a brand new feature in TMG, and helps prevent zero-day attacks.

This post explains how NIS works. Let’s take a scenario.

  • A vulnerability is detected in a product and disclosed on the internet.
  • Software vendors start developing patches for affected customers.
  • At the same time, attackers take advantage of the disclosed vulnerability, even before a patch is released.

Software vendors can take weeks or even months to develop and release a patch for a disclosed vulnerability. Until then, the vulnerability is out in the open. This means an attacker can compromise a system using the disclosed vulnerability before the software vendor can deliver a patch. This is called a zero-day situation.

How does NIS help in the zero-day situation?

  • NIS is a signature-based IPS. NIS will receive the signatures from the software vendor as soon as a vulnerability is disclosed.
  • While the patches are still being developed, NIS blocks all traffic matching this vulnerability signature, preventing attackers from compromising even unpatched systems.

So, what are the benefits?

  • It shrinks the ‘vulnerability window’ between vulnerability disclosure and patch deployment from weeks to just a few hours.
  • For retired Microsoft products (no longer supported by Microsoft), new security patches are not developed. For example, Windows Server 2003 SP1 was retired in April 2009, and when Conficker emerged it attacked unpatched machines, wreaking havoc.
  • NIS signatures for Microsoft products are updated free of charge for all TMG customers.
  • NIS is based on GAPA (General Application-level Protocol Analyzer) by Microsoft Research, and can also be extended to third party products, although at the moment it is protecting only Microsoft products.

How to enable NIS on TMG?


  1. On the Forefront TMG console, go to Intrusion Prevention System.
  2. In the Tasks pane, click Configure Properties.
  3. Select the “Enable NIS” checkbox.
  4. Review the list of signatures. These are updated automatically and free of charge for Microsoft products (no subscription license is required).
  5. When a user tries to browse a website that attempts an attack using a known vulnerability, NIS blocks the request and the user sees an error page instead.


Posted in Security

Cheating Email Retention

Posted by Alin D on December 10, 2010

A retention policy determines the length of time emails are to be saved before final deletion. However, enforcing the policy without leaving gaps may be trickier than expected. Today we look at how many organizations fail to apply the policy to all of their emails.

Most organizations employ retention policies so as to be compliant with legal requirements. The simple fact that all kinds of information concerning an organization are delivered via email should be enough to explain why legislators demand that this communication channel be correctly managed.

The need to satisfy legal requirements is worrisome to many administrators once they appreciate the key role they play. IT should be about technology. Instead, if we’re lucky, we end up with an attorney advising us on how to manage our servers. Many won’t even have that luxury and are expected to work everything out on their own.

At the end of the day, being compliant involves enforcing email retention policies. The retention period provides a time window within which we can go back and recover emails. On one hand, the policy defines the length of time during which we can hold on to information without breaching Data Protection and Privacy laws. On the other hand, the policy limits our obligation to supply information in case of an investigation or litigation.

It is certainly not my intention to delve into matters such as Data Protection, Privacy and Litigation. My topic today is retention policies. More specifically I want to discuss the scope of such policies and how in some cases emails might be falling out of the policy scope at our own peril.

Which Emails should we retain?

Very often the mailbox store, the location where emails are saved, is considered to be the perfect place to enforce a retention policy. After all, the mailbox is where received emails are deposited and new emails are created. Thus, if the policy allows us to recover all emails within a mailbox (including those deleted), we should be fine. In this context, a mailbox database archiving solution is normally considered to be an integral part of email retention. At the same time, the database defines the physical scope of the retention policy.

In this scenario, is the retention policy truly covering all emails? To answer this, we need to better define which emails our server is responsible for. When receiving an email over SMTP we have the option to either accept or reject the email. Rejection is the standard method to refuse responsibility for an email. On the other hand if we accept the email at SMTP transport level we are expected to deliver it to the intended recipient. If we later abandon delivery, the SMTP specifications require us to notify the original sender with an NDR. Thus from a technical point of view we have a clear definition of how an email server accepts and refuses responsibility for email delivery.

With this definition, we are broadening the scope our retention policies are required to cover. To understand this point let’s have a look at this setup:

All Accepted Emails Delivered to the Mailbox Store

The internet-facing SMTP transport will immediately decide between accepting and rejecting emails. The Email Hygiene stage could also advise the SMTP transport, flagging spam and malware-infected emails for rejection. If not rejected, the email is delivered to the Mailbox Store. This is the ideal scenario: emails are either rejected or delivered. Retention policies enforced at the Mailbox Store correctly cover all emails.

In practice many email servers do not follow the SMTP specifications to the letter. Here is an example:

Accepted Emails Silently Deleted

In this case the Email Hygiene layer, apart from causing email rejection, may also silently delete emails. This is a very common option in anti-spam filters, where emails are deleted without generating any NDR in order to avoid backscatter problems. So now we have emails that are accepted but never delivered. The email never reaches the mailbox, and thus the retention policy is not enforced. Here I am using Email Hygiene as an example; of course, the same is true for any other email server component that may silently delete emails.

What’s the Problem with Deleting Junk?

As long as our retention policy is only failing to cover junk emails, silent deletion should not be a cause for concern. However, keep in mind that Email Hygiene is only an example here. It is up to you to check:

  • if any other software may be causing silent deletion at transport level
  • if the retention policy scope is limited to the mailbox store

Even if we stick to the Email Hygiene example, are we sure only junk and malware are being deleted? No matter what software vendors say, the possibility of false classification exists. This may lead to the loss of legitimate emails. What if the lost email happens to be the one you require during litigation? Shouldn’t this email also be covered by your retention policy?

Some may argue that in practice Email Hygiene applications are very accurate and the chance for something like this to happen is next to nil. The counter argument is that software is made by humans and managed by humans. The possibility of something going wrong should not be underestimated.

There is always the possibility of bugs and errors from software vendors. We normally only hear about the most serious bugs affecting the big vendors because of the noise their names guarantee. The fact is that no one is immune to error.

A much more common cause of software going wrong is incorrect configuration. For example, if your software allows manual email blacklisting, then a false classification is just a matter of adding the wrong entry to the blacklist. Just imagine someone erroneously blacklisting a sender they were supposed to whitelist.

In all cases the point is: why take the risk when the technology that allows your server to be compliant is readily available?

Solutions

This type of problem can be dealt with when designing the system and choosing the components that provide additional functionality to the email server.

One obvious solution is to prefer SMTP rejection over silent deletion. However, this is only possible if the email is blocked at the internet-facing SMTP transport, before it has been accepted.
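
To make the distinction concrete, here is a minimal sketch of the “reject, don’t silently delete” approach using Python’s aiosmtpd package. The is_spam() function is a hypothetical stand-in for whatever your Email Hygiene engine does; the point is that suspect mail is refused with a 550 response during the SMTP conversation, so the sending server retains responsibility for it, instead of being accepted and then quietly dropped.

```python
# Minimal sketch: refuse suspect mail at SMTP time instead of deleting it later.
# is_spam() and deliver_to_mailbox() are hypothetical placeholders.
from aiosmtpd.controller import Controller


def is_spam(content: bytes) -> bool:
    # Placeholder classification; plug in your real Email Hygiene engine here.
    return b"make money fast" in content.lower()


def deliver_to_mailbox(envelope):
    # Placeholder hand-off to the mailbox store.
    print(f"Delivering message for {envelope.rcpt_tos}")


class RejectingHandler:
    async def handle_DATA(self, server, session, envelope):
        if is_spam(envelope.content):
            # 550 during the SMTP session: we never accept responsibility,
            # so the sender learns of the failure and nothing is silently deleted.
            return "550 Message rejected by content policy"
        deliver_to_mailbox(envelope)
        return "250 Message accepted for delivery"


if __name__ == "__main__":
    controller = Controller(RejectingHandler(), hostname="0.0.0.0", port=8025)
    controller.start()
    input("SMTP listener running on port 8025; press Enter to stop.\n")
    controller.stop()
```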

Another option is to employ solutions that provide built-in archiving. Email Hygiene solutions such as IMF Tune today allow you to archive deleted emails. Furthermore, once archived emails reach the specified age limit, they can be purged automatically.

In this manner we can extend the reach of our retention policy; the mailbox store is no longer a physical limit. If our company has a policy that all emails are retained for a minimum of 60 days, all we need to do is give our Email Hygiene solution enough disk space to store 60 days of emails.
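
As a simple illustration of the age-based purge side of such an arrangement, the sketch below walks an archive directory and deletes messages older than a 60-day retention window. The archive path and the one-file-per-message (.eml) layout are assumptions made for the example, not how any particular product stores its archive.

```python
# Minimal sketch: purge archived messages older than the retention window.
# ARCHIVE_DIR and the one-file-per-message layout are assumptions for the example.
import time
from pathlib import Path

ARCHIVE_DIR = Path(r"D:\HygieneArchive")   # hypothetical archive location
RETENTION_DAYS = 60

cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60

for message_file in ARCHIVE_DIR.glob("*.eml"):
    # Compare the file's last-modified time against the retention cutoff.
    if message_file.stat().st_mtime < cutoff:
        print(f"Purging {message_file.name} (older than {RETENTION_DAYS} days)")
        message_file.unlink()
```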

Final Tips

In the last few years the IT world has developed solutions in response to legislation requiring organizations to correctly manage the information on their servers. Today many have absorbed the core concepts and made significant steps towards compliance. However, some are unaware that their solution is less than bulletproof. Luckily, the solutions to close the gaps are readily available.

Posted in Exchange

Windows User State Virtualization – Mixed Environments

Posted by Alin D on October 7, 2010

Designing a User State Virtualization strategy for a mixed environment poses a number of different challenges. By mixed environment I’m referring to a client computing infrastructure that has:

  • Different versions of Microsoft Windows such as Windows 7, Windows Vista and Windows XP on different computers
  • Different architecture versions of the same version of Windows such as Windows 7 x86 and Windows 7 x64 on different computers
  • Different versions of applications such as Office 2010, Office 2007 and Office 2003 on different computers
  • Different architecture versions of the same application such as Office 2010 x86 and Office 2010 x64 on different computers

This article examines the issues that can arise when planning USV solutions for mixed environments and describes some best practices for designing and implementing such solutions.

Planning USV for Mixed Windows Versions

As described in the first article of this series, Windows Vista introduced a new “v.2” user profile that has a flattened folder structure that separates user data and settings better than the Windows XP user profile did. As a result of this change, older Windows XP user profiles are not compatible with the newer v.2 profiles of Windows Vista. This means that you can’t use Roaming User Profiles (RUP) as a solution for roaming between computers running Windows Vista and Windows XP. If you try to implement RUP in a mixed XP/Vista environment, users who roam between the two OS versions will end up with two separate profiles on the RUP server, one profile for XP computers and the other for Vista computers.

No changes were made to user profiles in Windows 7 and the user profile structure in Windows 7 is identical to that in Windows Vista. This means you can use RUP to enable users to roam between computers running Windows 7 and Windows Vista provided there are no other architecture or application-specific issues as described in the sections below. It also means that you can’t use RUP to roam between Windows 7 and Windows XP computers.

If users do need to roam between computers running Windows XP and computers running later versions of Windows, you can use Folder Redirection (FR) with Offline Files (OF) enabled to redirect Documents and other folders where users store work-related data. This allows user data to be accessible from computers running any version of Windows. You cannot roam user settings, however, since user settings reside in both the AppData\Roaming folder and the Ntuser.dat file (the HKCU registry hive) in the root of the user’s profile. Since RUP cannot be used in this scenario, and since AppData\Roaming should never be redirected unless you also use RUP, only user data can be roamed in this scenario, not user settings. Table 1 summarizes a USV strategy for mixed environments running different versions of Windows on different computers.

OS versions      RUP   FR with OF
XP and Win7      No    Yes (data folders only)
XP and Vista     No    Yes (data folders only)
Vista and Win7   Yes   Yes

Table 1: USV strategy for mixed environment having different Windows versions on different computers

If you plan on implementing FR in a mixed XP and Win7 (or mixed XP and Vista) environment and you need to redirect the Pictures, Music or Videos folder, you will need to select the Follow The Documents Folder option on the Target tab of the redirection policy for these folders (see Figure 1). Doing this will cause these folders to be redirected as subfolders of the Documents folder (as in XP) instead of as peers of the Documents folder (as in Vista and later), and causes them to inherit their redirection settings from the Documents folder instead of having these configured on the folders themselves. Don’t do this, however, unless you have users who still need to access their redirected data folders from computers running Windows XP, since choosing this option alters the structure of the user’s profile. If users only need to access redirected data from computers running Windows Vista or later, then don’t select Follow The Documents Folder when redirecting the Pictures, Music or Videos folders. And in any case, you shouldn’t redirect these particular folders at all unless there is a business need for them to be redirected (such as centrally backing up internally developed training videos or in-house graphics).


Figure 1: Configuring redirection on Pictures to follow Documents

Alternatively, instead of selecting Follow The Documents Folder individually for the Pictures, Music and Videos folders, you can simply select Also Apply Redirection Policy To Windows 2000, Windows 2000 Server, Windows XP and Windows Server 2003 Operating Systems on the Settings tab as shown in Figure 2 as this has the effect of automatically configuring the Pictures, Music and Videos folders to Follow The Documents Folder.


Figure 2: Enabling this setting causes Pictures, Music and Videos to follow Documents.

Planning USV for Mixed Windows Architectures

Beginning with Windows Vista, two hardware architectures have been available for Windows platforms: x86 (32-bit) and x64 (64-bit). An x64 version of Windows XP was also released but was never widely deployed, largely due to lack of device driver support, so we won’t consider Windows XP x64 in this discussion.

While the underlying user profile folder structures of Windows 7 x86 (or Windows Vista x86) and Windows 7 x64 (or Windows Vista x64) are identical, there are differences in how the Windows registry is structured on x86 and x64 versions of Windows. Specifically, the registry on x64 Windows also contains the x86 registry structure, but the reverse isn’t true: the registry on x86 Windows does not contain any x64 registry structure. Another issue is that the locations of some programs are stored in the registry as static paths such as C:\Program Files or C:\Program Files (x86), and when you try roaming between 32-bit and 64-bit machines these registry entries will typically cause problems. The result of these differences is that you can’t use RUP to roam users between computers running Windows 7 x86 (or Windows Vista x86) and computers running Windows 7 x64 (or Windows Vista x64).
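
You can see this asymmetry for yourself with Python’s built-in winreg module on a 64-bit Windows machine: the same key path can be opened through the 64-bit view and through the WOW64 32-bit view, and values such as program paths differ between them. The key and value below are just an example (an HKLM value chosen because it shows the Program Files difference clearly); this is a sketch of the registry-view behavior, not of roaming profiles themselves.

```python
# Minimal sketch: read the same registry value from the 64-bit and 32-bit
# registry views. Requires a 64-bit Windows machine; example key/value only.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion"
VALUE_NAME = "ProgramFilesDir"


def read_value(view_flag, label):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_READ | view_flag) as key:
            value, _ = winreg.QueryValueEx(key, VALUE_NAME)
            print(f"{label}: {value}")
    except OSError as err:
        # On x86 Windows there is no separate 64-bit view to open.
        print(f"{label}: not available ({err})")


read_value(winreg.KEY_WOW64_64KEY, "64-bit view")   # e.g. C:\Program Files
read_value(winreg.KEY_WOW64_32KEY, "32-bit view")   # e.g. C:\Program Files (x86)
```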

However, if users do need to roam between computers running x86 and x64 versions of Windows, you can use FR with OF to redirect Documents and other data folders so that work-related data is accessible from computers running both architectures. You cannot roam user settings, however, since user settings in HKCU on a computer running an x64 version of Windows are not compatible with user settings in HKCU on a computer running an x86 version of Windows. Table 2 summarizes a USV strategy for mixed environments running x86 versions of Windows on some computers and x64 versions of Windows on others.

OS architectures          RUP   FR with OF
Win7 x86 and Win7 x64     No    Yes (data folders only)
Vista x86 and Vista x64   No    Yes (data folders only)

Table 2: USV strategy for mixed environment having both x86 and x64 versions of Windows on different computers

Planning USV for Mixed Application Versions/Architectures

Issues involving applications in a roaming environment are similar to those involving Windows versions. For example, say you have Windows Vista on some computers and Windows 7 on others. You also have version N of an application installed on the Vista machines, but the newer version N+1 of the same app installed on the Windows 7 machines. If you implement RUP and/or FR/OF in such an environment, can you expect users to experience any problems when they work with this application?

Probably. It’s likely that the new version of the app has more features than the old one, and new features will undoubtedly mean new per-user registry settings and possibly new user settings stored as files under the AppData\Roaming folder. What happens when registry settings or AppData\Roaming files used by the new version of the app are loaded by the old version? Who knows! The only way you can be sure this scenario will work is to test, test and test before you deploy your USV solution in your production environment. Otherwise, users may find that certain apps they use crash or hang unexpectedly, or behave in strange and unpredictable ways. Such a scenario could even cause users to lose data or cause data to be corrupted. It’s best to play it safe and make sure that, regardless of which version of Windows is running on each computer, the same version of each app is installed. Be kind to your helpdesk personnel and don’t let them be inundated with complaints from angry users.

This is even more true with different architecture versions (x86 or x64) of applications. For example, say you have the x64 version of a particular application installed on Windows 7 x64 computers and the x86 version of the same application installed on Windows Vista x64 computers. The OS architectures are both x64, which supports a RUP scenario, but it’s likely that the x86 and x64 versions of the application store their settings in different parts of HKCU and maybe even in different folders and files under AppData\Roaming. This means the same kind of frustrating, unpredictable behavior may occur if users work on the same data file from one computer running the x86 version of the app and later from a second computer running the x64 version. Even worse, the data file being worked on might become corrupted. I’m not saying this will happen for sure, and the only way to know is to test, test and test again. But it’s better to play it safe and simply standardize all your computers on either the x86 or x64 version of the app. This may not be a big issue today, since 64-bit apps like the 64-bit version of Office 2010 are only now appearing, but in the future it’s likely to be a concern as more and more software vendors start releasing 64-bit versions of apps that had until now only been available in 32-bit form. Table 3 summarizes a USV strategy for mixed environments running different versions or architectures of applications on different computers.

App versions/architectures                    RUP                          FR with OF
Multiple different versions of the same app   Play it safe—don’t use RUP   Yes (data folders only)
Both x86 and x64 versions of the same app     Play it safe—don’t use RUP   Yes (data folders only)

Table 3: USV strategy for mixed environment having different application versions/architectures on different computers

If there is a clear business need to provide users with multiple versions of applications, or even different architecture versions of applications, you should consider implementing one of Microsoft’s application virtualization solutions, choosing the one that meets your needs in terms of functionality and manageability.

Conclusion

The bottom line in mixed environments (different versions/architectures of Windows/applications) is to keep things simple and play it safe. Your USV strategy should be to virtualize only user data folders like Documents (and possibly also Desktop, Pictures, etc.), and you should use FR together with OF to make user data available to users from any computer they log on to. Do not try to virtualize user settings using RUP or by redirecting the AppData\Roaming folder. If possible, try to standardize on a single version/architecture of each of your applications.

Posted in TUTORIALS, Windows 2008