Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘domain controller’

Cloud Deployments with Windows 2012 Active Directory

Posted by Alin D on September 21, 2012

Many services that have traditionally been run on premises are now being moved to public or private clouds, but not every workload is suitable for such a move due to dependencies upon infrastructure components that simply do not work well in the cloud. One such infrastructure component is Active Directory – or at least it used to be.

When Microsoft was preparing to release Office 365, the company knew that it would require Active Directory to function in the cloud. This isn’t a big deal for standalone deployments, but many organizations rely upon hybrid deployments in which some Exchange, SharePoint and Lync servers exist on premises while others reside in the Office 365 cloud. These types of deployments require Active Directory synchronization between the on-premises domain and the cloud domain being used by Office 365.

The synchronization process isn’t pretty. It is messy to set up and the synchronization only works in one direction. Even so, the Office 365 experiment gave Microsoft some invaluable experience with cloud-distributed Active Directory deployments, and this experience was put to good use in Windows Server 2012.

One of the most important Active Directory features in Server 2012 is the new deployment wizard. This wizard, which is built on PowerShell, changes the way in which domain controllers are provisioned. For starters, the wizard performs a number of prerequisite checks so that issues that might otherwise have caused problems with the new domain controller, or with Active Directory as a whole, can be avoided. More importantly, however, the new deployment wizard is designed to work remotely. In other words, you no longer have to run it directly on the server that is being promoted to a domain controller. Thanks to remote PowerShell, cloud-based servers can be promoted to domain controllers.
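The wizard's PowerShell underpinnings make this easy to see in practice. The sketch below is illustrative only: the server name "DC02", the domain "contoso.com" and the credentials are placeholders, and it assumes PowerShell remoting is enabled on the target server.

```powershell
# Collect credentials and the DSRM password locally, since prompts
# cannot be answered inside a remote session
$cred    = Get-Credential CONTOSO\Administrator
$dsrmPwd = Read-Host -AsSecureString "DSRM password for the new DC"

Invoke-Command -ComputerName DC02 -ScriptBlock {
    # Install the AD DS role binaries, then promote the server.
    # Install-ADDSDomainController runs its prerequisite checks
    # before making any changes.
    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
    Install-ADDSDomainController -DomainName "contoso.com" `
        -InstallDns `
        -Credential $using:cred `
        -SafeModeAdministratorPassword $using:dsrmPwd
}
```

Running the promotion this way against a cloud-hosted VM is exactly the scenario the remote design enables.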

Upon completion, the wizard gives the administrator the option of viewing a PowerShell script that contains an exact copy of the commands that were used to provision the domain controller. This script can be used to automate the provisioning of additional domain controllers, making it very easy to perform large-scale Active Directory deployments. Although script generation is new to Windows Server, it is not new to Microsoft: both Exchange Server 2007 and Exchange Server 2010 are designed to show administrators the PowerShell cmdlets behind operations performed through the graphical user interface. It is really nice to see this type of PowerShell output finally put to work in Windows Server.

Another Windows Server 2012 feature that lends itself well to Active Directory is Deployment with Cloning, which allows the administrator to deploy new domain controllers by simply cloning an existing domain controller.

The process works by setting up a virtual server as a domain controller. After doing so, you can create a copy of the domain controller and then authorize the original source domain controller to be cloned. Windows Server 2012 offers the option of providing a configuration file containing information that can be used in the cloning process such as computer names, IP addresses and DNS servers to be used by cloned domain controllers. However, such a configuration file is not mandatory. If you do not provide Windows with a configuration file the system will attempt to automatically provision cloned domain controllers with computer names, IP addresses, etc.
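On Windows Server 2012 this workflow maps onto a handful of cmdlets. A rough sketch, run on the source domain controller, with the DC names and IP addresses below as placeholders:

```powershell
# 1. Authorize the source DC for cloning by adding it to the
#    built-in "Cloneable Domain Controllers" group
Add-ADGroupMember -Identity "Cloneable Domain Controllers" `
    -Members (Get-ADComputer "DC01")

# 2. Check for installed applications or services that do not
#    support cloning
Get-ADDCCloningExcludedApplicationList

# 3. Generate the optional clone configuration file with the
#    settings the clone should receive
New-ADDCCloneConfigFile -CloneComputerName "DC02" -Static `
    -IPv4Address "10.0.0.12" -IPv4SubnetMask "255.255.255.0" `
    -IPv4DefaultGateway "10.0.0.1" -IPv4DNSResolver "10.0.0.11"
```

If no clone configuration file is present then, as noted above, Windows attempts to provision the clone automatically with a generated name and network settings.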

Clearly, Microsoft has done a lot of work around the domain controller deployment process in Windows Server 2012.  It is now more practical to operate hybrid domains in which domain controllers are located both on-premises and in the cloud.

Posted in Windows 2012

A Few Domain Controller Security Tips

Posted by Alin D on August 19, 2011

A domain controller is just that—a controller. It controls authentication, possibly authorization and some accounting, and generally holds the lifecycle of security identities for everything in your company that uses any part of Windows.

As such, special security considerations exist for domain controllers. How do you score on this front? Check out these five tips for hardening the entire environment around your domain controllers (DCs).

1. Limit physical access.

This is the single biggest mitigating factor you can provide in your overall domain controller security package. The overarching issue is that your domain controller is the central security authority over everything on your network, and as you well know, there are many trivial ways to obtain information right off a hard disk if you have local, physical access to a machine. Hashes alone offer everything a cracker needs in order to pass himself off as a true, legitimate, authenticated user, and these are easy to grab if you have the domain controller’s disk in hand. That is not to mention the possibilities of actually logging on via those hashes and modifying logon scripts, installing malicious programs that replicate to other domain controllers, and so on.

If you have physical (not virtualized) domain controllers, then before you do anything else, buy a cage and a secure lock and put them behind it. Don’t let a DC run under the admin’s desk, or have your data center be a small closet with no lock. It holds the keys to the kingdom, your company’s security treasury, so secure it like you would blank checks: under lock and key.

2. Design correctly from the start.

A correctly designed Active Directory topology will contain threats so that even if a DC is compromised, your entire network of forests doesn’t have to be flattened and rebuilt. Make sure your forest and domains reflect the real, physical locations you have in different cities, counties, and countries; have your organizational units match the types of machines and people you have in your company; and let security groups represent the hierarchy of your organizational chart. Then, if a DC in one forest for Europe is compromised, you don’t have to rebuild Asia.

3. Virtualize your domain controllers.

By using virtual machines (VMs) as your domain controllers, you can encrypt the disks on which your virtual hard disks reside using BitLocker or some other full-drive-encryption product. Then, ensure the host machines running those VMs are not joined to the domain. If by some chance someone makes off with your host machine and the DCs, the need to decrypt the drive before reaching the VHDs presents yet another obstacle to an attacker planting nefarious things in your directory.
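As a minimal sketch of the encryption step, assuming BitLocker is the chosen product and that D: is the host volume holding the VHDs (both assumptions, not requirements):

```powershell
# Encrypt the host volume that stores the domain controller VHDs.
# Run elevated on the (non-domain-joined) virtualization host.
Enable-BitLocker -MountPoint "D:" `
    -EncryptionMethod Aes256 `
    -RecoveryPasswordProtector

# Display the numerical recovery password so it can be stored
# somewhere safe -- offline, never on the host itself
(Get-BitLockerVolume -MountPoint "D:").KeyProtector
```

A recovery-password protector is a reasonable choice for a data volume; the point is simply that the VHDs are unreadable if the physical disk walks out the door.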

4. Follow security trust best practices

Know your boundaries, as security experts say. There’s a fantastic guide to understanding trusts and the various considerations therein on TechNet. Pay close attention to the Selective Authentication section, a great way to prevent random access attacks.

5. Secure the Directory Services Restore Mode password more carefully than any other password.

Directory Services Restore Mode (DSRM) is a special mode for fixing Active Directory offline when something’s gone wrong. The DSRM password is a special back door that provides administrative access to the directory. You use this in an offline, text-mode state. Protect this password like it’s the one thing that can sink your forest, because it is just that. You can also download a hotfix for Windows Server 2008 that will sync the DSRM password with the domain administrator account—or, if you have already installed Service Pack 2, you have this utility already. Just use this command:

ntdsutil “set dsrm password” “sync from domain account <DomainAdminAccount>” q q

Conclusion

Overall, if a domain controller is stolen or otherwise leaves your company’s possession in an unauthorized way, you can no longer trust that machine—but unfortunately, since that domain controller contains everything valuable and secret about your IT identities, the best (and most regrettable and painful) advice is simply to destroy that forest and rebuild it. Which makes the first point in this article the most prescriptive and proactive best practice there is.

Posted in TUTORIALS

Clean up Exchange 2003 remnants from Active Directory to install Exchange 2010

Posted by Alin D on August 12, 2011

When installing Exchange 2010, one area that is easy to overlook is the state of Active Directory. Setup assumes that the Active Directory database is in pristine condition, but organizations that have been using Exchange for many years often have Exchange 2003 remnants in AD. These remnants can halt an Exchange 2010 installation. The method described in this article should remove the server in question from Active Directory to the point where you’re able to install Exchange 2010.

Warning! Any mistakes made during implementation can damage your Exchange servers or your Active Directory forest. Back up domain controllers before attempting any of these techniques.

These procedures are only intended for situations in which a server no longer exists in your organization, but some fragments have been left behind in AD. If the server still exists in your Exchange organization, you should remove it.

To remove any lingering portions of an old Exchange 2003 server from Active Directory, use the Windows Support Tool ADSI Edit.

The Windows Support Tools are included with Windows Server, but are not installed by default. The exact method to install these tools varies depending on the version of Windows you’re using. If you’re having trouble locating it, look for an installer in the \Support\Tools folder on the Windows installation DVD.

Next, open ADSI Edit, which is located in the Program Files\Support Tools folder, and connect it to a domain controller, if necessary. Then navigate through the console tree to:

Configuration
CN=Configuration, DC=<your domain>, DC=com
CN=Services
CN=Microsoft Exchange
CN=<your organization name>
CN=Administrative Groups
CN=<Your Administrative Group Name> (Note: this could also be an Exchange 5.5 site name)
CN=Servers

These container names may not exactly match those in your organization. For example, the first node of the hierarchy that I’ve listed is Configuration. Although this is technically the node’s name, ADSI Edit also displays the name of your domain controller in brackets. For example, on my server this container is listed as Configuration [tazmania.production.com]

Selecting the CN=Servers container lists Exchange servers within the administrative group that you’ve selected. If you need to eliminate references to a specific server, right-click on the server object (not the CN=Servers container) and select Delete. A confirmation will ask if you want to delete that server. Verify that you’ve selected the correct server and click Yes.

 

In some situations, there may be a few stray objects, such as routing group connectors, that must be manually removed. Use Exchange management tools to manually remove these objects.

 

Posted in Exchange

Script to list all global and local groups on a given server

Posted by Alin D on August 2, 2011

Used to list all global and local groups on a given server.

Usage: $script /[s]erver /[g]lobal /[l]ocal /[v]erbose

/server Name of server for which to list all groups.
Server can be a domain controller. If no server
is specified, this defaults to localhost.
/global List only global groups.
/local List only local groups.
/verbose Show group comments.
/help Displays this help message.
use Getopt::Long;
use diagnostics;
use strict;
use Win32::Console;
use Win32::Lanman;

##################
# main procedure #
##################
my (%config);

p_parsecmdline(\%config);
p_checkargs();

# set console codepage
Win32::Console::OutputCP(1252);

if ($config{global}) {
p_listglobalgroups($config{server});
} elsif ($config{local}) {
p_listlocalgroups($config{server});
} else {
p_listglobalgroups($config{server});
p_listlocalgroups($config{server});
}

exit 0;

##################
# sub-procedures #
##################

# procedure p_help
# displays a help message
sub p_help {
my ($script)=($0=~/([^\/]*?)$/);
my ($header)=$script." v1.1 - Author: alin@keptprivate.com";
my ($line)="-" x length($header);
print <<EOT;

$header
$line
Used to list all global and local groups on a given server.

Usage: $script /[s]erver /[g]lobal /[l]ocal /[v]erbose

/server Name of server for which to list all groups.
Server can be a domain controller. If no server
is specified, this defaults to localhost.
/global List only global groups.
/local List only local groups.
/verbose Show group comments.
/help Displays this help message.
EOT

exit 1;
}
# procedure p_parsecmdline
# parses the command line and retrieves arguments values
sub p_parsecmdline {
my ($config) = @_;
Getopt::Long::Configure("prefix_pattern=(-|/)");
GetOptions($config, qw(
server|s=s
global|g
local|l
verbose|v
help|?|h));
}
# procedure p_checkargs
# checks the arguments which have been used are a valid combination
sub p_checkargs {
p_help() if defined($config{help});
if (!$config{server}) {
$config{server} = Win32::NodeName();
}
}
# procedure p_listglobalgroups
# lists all global groups on a given server
sub p_listglobalgroups {
my $server = shift;
$server =~ s/\\//g;    # strip any leading backslashes
my (@groups,$group);
my ($header)="Global groups on '\\\\$server':";
my ($line)="-" x length($header);

if (!$config{verbose}) {
print "\n$header\n$line\n";
}
if (Win32::Lanman::NetGroupEnum("\\\\$server",\@groups)) {
foreach $group (sort { ${$a}{name} cmp ${$b}{name} } @groups) {
next if (${$group}{name} eq "None");
if ($config{verbose}) {
$~ = 'GLOBAL';
$^ = 'GLOBAL_TOP';    # use the matching top-of-page format
write;
} else {
print "${$group}{name}\n";
}
}
} else {
print "ERROR: ".Win32::FormatMessage(Win32::Lanman::GetLastError());
}

format GLOBAL_TOP =
Group Name Comment Type
--------------------------------- ---------------------------------- -------
.
format GLOBAL =
@<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< global
${$group}{name},${$group}{comment}
~~ ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
${$group}{comment}
.

}
# procedure p_listlocalgroups
# lists all local groups on a given server
sub p_listlocalgroups {
my $server = shift;
$server =~ s/\\//g;    # strip any leading backslashes
my (@groups,$group);
my ($header)="Local groups on '\\\\$server':";
my ($line)="-" x length($header);

if (!$config{verbose}) {
print "\n$header\n$line\n";
}
if (Win32::Lanman::NetLocalGroupEnum("\\\\$server",\@groups)) {
foreach $group (sort { ${$a}{name} cmp ${$b}{name} } @groups) {
if ($config{verbose}) {
$~ = 'LOCAL';
$^ = 'LOCAL_TOP';    # use the matching top-of-page format
write;
} else {
print "${$group}{name}\n";
}
}
} else {
print "ERROR: ".Win32::FormatMessage(Win32::Lanman::GetLastError());
}

format LOCAL_TOP =
Group Name Comment Type
--------------------------------- ---------------------------------- -------
.
format LOCAL =
@<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< local
${$group}{name},${$group}{comment}
~~ ^<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
${$group}{comment}
.

}

Posted in Perl

How to Troubleshoot poor Windows logon performance in Active Directory

Posted by Alin D on July 28, 2011

Introduction

Problems based around performance are often the most frustrating to resolve, mainly because there are so many variables to consider. In this article, I will focus on the difficult issue of diagnosing and resolving slow logon performance for users when logging in to their domain accounts.

When troubleshooting any performance problem, you must first define what is an acceptable delay. I’ve seen some environments where users experience 5-10 minute logon times and don’t complain simply because they are used to it. Then I’ve seen other scenarios where even a one-minute delay is considered unacceptable. That’s why it’s important to first define what is reasonable, so that you know when you have solved the problem.

Windows logon performance factors

It’s important to consider a variety of factors when looking for the cause of logon performance issues. Some of these factors include:

  • the proximity of domain controllers to your users
  • network connections and available bandwidth
  • hardware resources on the DCs (x64 vs. x86, memory, etc.)
  • the number of Group Policy Objects (GPOs) applied to the user and computer (which directly affects bandwidth)
  • the number of security groups the user and computer are members of (which also directly affects bandwidth)
  • GPOs containing settings that require extra processing time, such as:
    • loopback processing
    • WMI filters
    • ACL filtering
  • heavily loaded domain controllers caused by:
    • applications requiring authentication
    • inefficient LDAP queries from user scripts or applications
    • a DC hosting other apps such as Exchange, IIS, SQL Server, etc.
  • client configuration:
    • memory, disk, processor, etc.
    • network interface (10/100/1000)
    • subnet mapped properly to the site
    • DNS configuration

Define the scope

I always spend time asking basic questions in order to define the true scope of the problem. This will take some effort because these problems are usually defined by users who complain, while there may also be users who have just learned to live with it. Below are some important questions to ask:

  • Are the problems defined to a single site, security group, OU, department, type of client (laptop or desktop), or OS?
  • Does the problem happen at a particular time of day?
  • Does the problem occur when you are in the office or connecting over the VPN?
  • Describe the symptoms:
    • Does the delay occur at a specific point each time (i.e., at “Network Settings” on the logon screen)?
    • Does it occur before or after the logon screen?
  • When did this start happening?

Data gathering and Tools

There are some basic tools that I use to gather data. For performance problems, I like to cast a wide net and collect all that I can. Here are some examples:

  • Run Microsoft Product Support Reports (MPSReports) on clients and their authenticating DCs. This common tool collects data from all event logs, MSINFO32, NetDiag, IPConfig, drivers, hotfixes and more. Hewlett-Packard also has its own version called HPS Reports which is, in my opinion, superior to Microsoft’s tool and will collect specific Active Directory data if run on a DC. It also collects a plethora of hardware-related information, even for non-HP hardware.
  • On the client, use Microsoft KB article 221833 to set verbose logging for Winlogon. This will provide excellent details in the %Systemroot%\Debug\UserMode\Userenv.log file. Note that this log does not contain date stamps, so you must:
    1. delete the existing userenv.log from the client
    2. enable verbose logging per KB 221833
    3. log off, log on, and save the userenv.log to a new location in order to limit data collection to the logon period.
    The userenv.log is excellent at following GPO and profile processing, and often you can clearly see where a logon delay occurs, indicated by a long interval between events.

  • Enable Netlogon logging. The Netlogon log is located in %systemroot%\debug and will be empty if logging is not enabled. This is an excellent source of information; for instance, it will show you clients in subnets that are not mapped to a site, which can cause a client to go to an out-of-site DC for authentication and result in a longer than expected logon time.
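Enabling the log is a one-liner with nltest; 0x2080ffff is the commonly documented verbose debug mask. Remember to turn logging back off afterwards, since the log grows quickly:

```powershell
nltest /dbflag:0x2080ffff   # enable; output goes to %systemroot%\debug\netlogon.log
# ...reproduce the slow logon, review the log, then disable:
nltest /dbflag:0x0
```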
  • Run Process Monitor from Sysinternals. Look in the Help section for details on enabling boot logging. You can capture process information during the slow boot to see which processes might be affecting performance.

Other tips for troubleshooting slow client logons

There are a few more quick things you can do to see if your logon performance is caused by a known issue.

First, examine the GPResult.exe output and the LOGONSERVER environment variable on the client. While MPSReports and HPS Reports collect the GPResult for the logged-on user, they don’t collect the LOGONSERVER variable, which points to the authenticating DC. This is important because each time a user logs on, the GPOs are downloaded to the client. SYSVOL — which contains the GPOs — is a DFS root, however, and does not obey client site awareness. Instead, the client collects the DCs (hosting the SYSVOL DFS root) in a randomized order, and the GPOs are downloaded from the first DC in the list.

I have seen situations where clients in a main hub site would go across a slow WAN link to an out-of-site DC in order to get the GPOs, causing very slow logon times. Since this could change on each logon, the problem was intermittent.

Examine the GPResult for the DC that the GPOs were downloaded from and see if the GPOs are coming from an out-of-site DC. Also compare the LOGONSERVER variable to see if the client is being authenticated to an out-of-site DC. The logon delay could be explained through this “normal” behavior using known slow or busy links.
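Both checks are quick to run on an affected client. The "applied from" string below assumes the English-language GPResult output:

```powershell
$env:LOGONSERVER                             # the authenticating DC, e.g. \\DC01
gpresult /v | Select-String "applied from"   # the DC that served the GPOs
```

If the two point at different DCs, or at a DC across a slow WAN link, you have a strong lead.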

Another good test is to boot to Safe Mode with Networking and see if the delay occurs. If not, then do a Net Start and list all the services started. Then boot in normal mode and run Net Start and list all the services again. The difference should point to services that may be suspect, and eliminating them one at a time should help you identify the problem. You can also try disabling applications that start on boot to see if an application is getting in the way.
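A sketch of that service comparison, capturing the running-service list in each boot mode and diffing the two files (the file paths are arbitrary):

```powershell
# In Safe Mode with Networking:
Get-Service | Where-Object { $_.Status -eq 'Running' } |
    Select-Object -ExpandProperty Name | Set-Content C:\safe-mode.txt

# After rebooting normally:
Get-Service | Where-Object { $_.Status -eq 'Running' } |
    Select-Object -ExpandProperty Name | Set-Content C:\normal.txt

# Services present only in normal mode are the suspects
Compare-Object (Get-Content C:\safe-mode.txt) (Get-Content C:\normal.txt)
```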

One final technique is to take a network trace using Netmon, Wireshark or another network capture utility. Since you are trying to capture the logon process, one good way to do this is to connect a dumb hub to the network cable going to the switch, then connect a cable from the hub to the problem PC and connect another cable to another PC or laptop that has Netmon or Wireshark installed. Run the capture tool in promiscuous mode and reproduce the logon. This setup will ensure that the capture collects traffic in and out of the client and eliminates the network noise.

These are the basics to get you started. Just remember that there are no magic solutions – it really just takes time and detective work to find the problem. In an upcoming article, I will describe the methods I used in some case studies that should help tie this all together.

Digging deeper

I will now dig a little deeper into how to develop an action plan to eliminate possible causes and, hopefully, find the problem.

Performance, of course, is always a challenge to write about because 1) everyone has a different view of acceptable performance and 2) there are many variables – hardware and software – that can affect performance. I do Active Directory-related troubleshooting for my day job, so that’s the context in which I’ve put this article. I have worked on a number of these issues and will rely on that experience to describe how to attack these problems.

The first thing you need to do is prepare a list of possible causes for slow client logon in general. This could probably be developed into a flow chart, but for now we’ll use a couple of lists and refer to them as we diagnose the problem.

Known causes of slow client logon performance

As I wrote in my previous article, here is a quick summary of what I’ve found can cause client logon delays in Windows. These are not listed in any particular order, and each could be at fault for any given situation:

 

  • Domain controller is unavailable or very busy:
    • DC overwhelmed by LDAP traffic
    • DC also runs Exchange, SQL Server, file/print services, etc.
  • Client is getting Group Policy from an out-of-site DC
  • Network traffic (startup/logon traffic is directly tied to the number of groups and GPOs that the computer and user are members of — very predictable)
  • Roaming profiles are slow to load
  • Inefficient logon scripts
  • Inefficient GPOs (filtering, restrictions)
  • Large number of GPOs and/or security group memberships
  • Viruses
  • Network components (drivers, switches, link speeds, dual-homed hosts, network cables, etc.)
  • Applications and services starting on the client at boot
  • Antivirus updates, Windows Update downloads
  • Faulty images

There are probably more possibilities, but this is a good list to start with.

Now let’s examine some questions to ask in order to narrow the scope. This list is in the order that I would ask the questions. Each question is followed by a list of troubleshooting steps to resolve the issue. You will likely find more than one of these will apply, so organize the steps into a logical sequence for an action plan.

  1. When did this start?
    This is tough since you are relying on calls to the help desk, which may not be entirely accurate since some users just learn to live with these issues. Interview the user and pin down the start of the problem. Then look at what changed, such as software installations, network changes, GPO changes or perhaps another problem that was solved with a hotfix. The answer to this will affect the rest of the questions you ask (for example, it might be time to move to 64-bit DCs!).
  2. Who is affected?
    This is difficult because once again you have to rely on help desk complaints.
    • One user – Investigate other users in the same location and security groups, using the same hardware, etc. to make sure the problem is affecting only one user. Focus on local settings, profiles, workstation configurations, groups, and so on.
    • Users in only one site – Look for problems at the domain controller or networking issues in the subnet(s). Examine domain controller performance to see if the DC is overwhelmed and can’t handle the load. The LOGONSERVER environment variable should be examined on each client to determine which DC is authenticating them — don’t assume they are authenticating to a DC in the site, as this can change. See if the “problem users” are all authenticating to one DC.
    • Users across sites – This could be the result of a network issue, a new patch installed, etc. Look for something in common among affected users, including when the problem was first seen.
    • New clients installed since a certain date – Perhaps these users have a new image or OS?
    • Terminal Services users – Look into local vs. roaming profile issues and terminal server load.
  3. Does this happen at the same time every day?
    Have the user log on and off at different times during the day, such as 10 a.m., 2 p.m., 7 p.m. or any other time when logon traffic is light. If the problem goes away, then you can focus on network traffic and DC performance during peak logon periods.
  4. Do you have sites across slow-linked networks?
    It is possible – and even common – for clients to authenticate to a local domain controller and get policy from another DC due to the way SYSVOL finds random DFS servers. It is also possible for a client to get policy from a DC in a poorly connected site, and since this can change, the problem could be intermittent. I don’t know of a fix for this but have heard that a possible workaround is to hard-code the LOGONSERVER environment variable to a specific DC. If this works in a test, then implement it only on problem clients. I have not done this, but it is worth consideration. The DC used for GPO loading is found in the GPResult output; run GPRESULT /v on the client.
  5. What did you change when this started?
    The most common response to this is “nothing”. After some digging, however, you’ll usually find something.
  6. Can the affected user reproduce the problem by logging on to another computer?
    In other words, does the problem follow the user? Or can another user who doesn’t have the problem log on to the affected computer and experience the same issue? If you can determine that the problem is tied to the computer itself, it will narrow your attack.
  7. Are you using roaming profiles (perhaps on some users and not others)?
    Check the network share and look for roaming profile issues. Also, follow the steps in part one of this article to enable verbose logging for Userenv logs and examine it for more information.
  8. Is the user having long delays when logging off?
    This can also cause logon delays due to a bloated profile and registry. For Windows XP and earlier versions, consider implementing the Microsoft User Profile Hive Cleanup Service (UPHClean) to clean up local profiles and the registry. UPHClean’s functionality is built into Vista.
  9. Are the affected users remote access clients?
    Perhaps the users only have a logon problem when using a remote access connection. Look at your remote connection software or VPN setup and try building a generic Windows connection rather than using your custom connection software. Your ISP and network performance could also be an issue here.

Additional Tips

    Here are some additional tips for finding the cause of these delays:

    • Find a test client. Ideally, you should be able to get a workstation and reproduce the problem without bothering a user.
    • Download and run MPSReports from Microsoft on the client and DC. This collects data for all event logs, MSINFO32, NetDiag, drivers, hotfixes and more. Remember, the more data you have, the easier it will be to track down the problem.
    • Run PerfMon on the DC and client, and see if you can match the time of the client problem with a performance spike on the domain controller.
    • Run a network trace and try to determine what is happening during the logon process that causes the delay.
    • See if the problem happens at a specific time of day and, if so, examine what is happening at that time. Suspects include AV and Windows updates, scheduled jobs, and client survey software.
    • Review GPO settings. Known performance hits can come from ACL and WMI filters, loopback processing, etc. Determine if any GPO settings were implemented at the time this problem started.
    • If the problem follows the user (see the “does the problem follow the user” question above), try copying the user account to create a new user. If that account has no problem, recreate the account. I have seen this work in some cases. This test also eliminates the profile. You should try deleting the user’s profile to see if that fixes the issue before recreating the account.
    • Review the logon scripts. They can grow little by little until they become unwieldy and ineffective.

As I stated before, there are no easy solutions to this problem and it can take a lot of time to debug. The best attack is to review the possible causes, ask the right questions to narrow the scope, and use the tools noted here to gather and analyze data to locate the cause.

Posted in TUTORIALS

    Proactive Active Directory Monitoring

    Posted by Alin D on July 28, 2011

    Companies go out of their way to ensure proper Active Directory backup procedures, various redundancy solutions and anything else that will help prevent or mitigate a disaster. For the most part, these are mainly reactive solutions.

    Many engineers have become so complacent with backup that they’ve forgotten one very important element, which is to keep Active Directory healthy in the first place. When AD becomes corrupt, it can be restored from a snapshot or repaired with Ntdsutil.exe.

    Being proactive doesn’t mean that planning for a disaster goes out the window. Key elements to disaster prevention include maintaining good backups and making sure snapshots are done on a storage area network, where available. However, there are certain tips and tricks within AD’s functionality that will help keep the entire environment more stable and healthy.

    Protecting AD against “accidental” object deletion

    Almost every engineer has made a mistake within Active Directory. Sometimes it’s a simple misspelling of a user’s name and other times it can be a bit more serious. There have been instances where an administrator logs into AD to perform some type of management and then accidentally deletes an entire organizational unit (OU). What if that OU contains 3000 users? Now what?

    In many situations, the administrator would then have to restore the AD database or try to find the latest AD snapshot. However, in Windows Server 2008 R2, Microsoft gives IT administrators a great option designed to protect Active Directory objects from being accidentally deleted. This option is available for all objects that are manageable through Active Directory Users and Computers, and is enabled by default when you create a new OU. By selecting the “Protect container from accidental deletion” option, an access control entry is added to the access control list on the object.

    Note: Accidental deletion protection is enabled by default only for OUs, not for user objects. This means that if you attempt to delete one or more user objects, even ones located inside a protected OU, you will succeed.

    With that in mind, to protect user, group, or computer objects from accidental deletion, you must enable this option manually in the object’s properties. Change the view in ADUC so that it shows the advanced features, open the object’s Properties window, and click the Object tab. There you can select the accidental deletion protection option.
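    The same protection can also be applied in bulk from PowerShell. The following is a hedged sketch that assumes the RSAT ActiveDirectory module is installed and uses a placeholder OU path:

    ```powershell
    # Requires the ActiveDirectory module (part of RSAT); the OU below is a placeholder.
    Import-Module ActiveDirectory

    # Protect every user object in a hypothetical OU from accidental deletion
    Get-ADUser -Filter * -SearchBase "OU=Sales,DC=contoso,DC=com" |
        Set-ADObject -ProtectedFromAccidentalDeletion $true

    # Verify the result
    Get-ADObject -Filter * -SearchBase "OU=Sales,DC=contoso,DC=com" `
        -Properties ProtectedFromAccidentalDeletion |
        Select-Object Name, ProtectedFromAccidentalDeletion
    ```

    Setting the flag this way adds the same deny-delete access control entry that the check box in ADUC adds.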

    Managing AD size by performing off-line defragmentation

    There are preset AD functions that work in the background to keep the environment healthy. For example, the online maintenance cycle keeps the database in check regularly and without administrator interaction. However, although the data within the database is regularly defragmented, the database itself has a tendency to increase in size over time.

    This is especially true if administrators periodically purge database records. For example, it’s quite possible to have a 4 GB Active Directory database that contains less than 1 GB of data, and over 3 GB of empty space. This space can be reclaimed by performing an off-line defragmentation.

    In Windows Server 2008, Active Directory runs as a service. Any time you want to perform maintenance on the Active Directory database, you can take it offline by simply stopping the Active Directory Domain Services service.

    It’s always a good idea to begin the process by performing a full system state backup. Once a successful backup is verified, open Windows Explorer and navigate to the C:\Windows\NTDS folder. The Active Directory database is stored in the NTDS.DIT file. You should make note of the size of this file so that you can go back later and see how much space you have reclaimed.

    At this point, you should open the Service Control Manager, and stop the Active Directory Domain Services service. After that’s complete, you will see a message telling you that a number of dependency services also need to be stopped. Click “Yes” to stop these additional services.

    Once all of the necessary services have been stopped, open Command Prompt on the server, and enter the following commands:

    NTDSUTIL

    Activate Instance NTDS

    Files

    Info

    At this point, you should see a summary of the files that are used by the Active Directory database. You can now begin the defragmentation process by entering the following command:

    Compact to C:\Windows\NTDS\Defragged

    Keep in mind that depending on the size of your database, this process can take quite a while to complete, and the domain controller that you are defragmenting is unavailable until the Active Directory Domain Services and all of the dependency services are brought back online.

    When the process completes, go to the C:\Windows\NTDS folder and rename the NTDS.DIT file to NTDS.OLD. You can delete this file later on, but hang onto it for now in case anything goes wrong with the defragmented copy of the database. Now, copy the defragmented database from C:\Windows\NTDS\Defragged to C:\Windows\NTDS.

    Finally, restart the Active Directory Domain Services (the dependency services will restart automatically). Now, you can reference back to see the reduction in space.
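    The sequence above can be condensed into the following elevated PowerShell session on the domain controller. This is an outline, not a paste-and-run script; it assumes the default C:\Windows\NTDS database location and a verified system state backup:

    ```powershell
    # Stop AD DS; -Force also stops the dependent services (KDC, DNS, etc.)
    Stop-Service NTDS -Force

    # ntdsutil accepts its menu commands as command-line arguments
    ntdsutil "activate instance ntds" files info "compact to C:\Windows\NTDS\Defragged" quit quit

    # Swap in the defragmented copy, keeping the original as a fallback
    Rename-Item C:\Windows\NTDS\ntds.dit ntds.old
    Copy-Item   C:\Windows\NTDS\Defragged\ntds.dit C:\Windows\NTDS\ntds.dit

    # Bring AD DS back online; dependent services restart automatically
    Start-Service NTDS
    ```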

    Proactive Tips and Best Practices
    There are many ways to keep your AD environment humming. Given its critical nature, every avenue should be taken to make sure Active Directory does not go down. Below is a brief list of some ways to be proactive when it comes to AD stability, security, and health:

    • Rename or disable the Administrator account (and guest account) in each domain to prevent attacks on your domains.
    • Manage the security relationship between two forests and simplify security administration and authentication across forests.
    • Place at least one domain controller in every site, and make at least one domain controller in each site a global catalog.
      • Sites that do not have their own domain controllers and at least one global catalog are dependent on other sites for directory information and are less efficient.
    • Use global groups or universal groups instead of domain local groups when specifying permissions on domain directory objects replicated to the global catalog.
    • Always have current backups and verify their consistency.
    • To provide additional protection for the Active Directory schema, remove all users from the Schema Admins group, and add a user to the group only when schema changes need to be made. Once the change has been made, remove the user from the group.
    • Always monitor AD health by ensuring proper permissions, good OU management, and performing preventative maintenances.
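    For the Schema Admins recommendation above, the add-change-remove pattern can be sketched with the ActiveDirectory module (the account name is a placeholder):

    ```powershell
    # Temporarily grant schema rights to a placeholder admin account
    Add-ADGroupMember -Identity "Schema Admins" -Members "jsmith"

    # ... make the schema change here ...

    # Revoke the membership immediately afterward
    Remove-ADGroupMember -Identity "Schema Admins" -Members "jsmith" -Confirm:$false
    ```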

     

    Posted in TUTORIALS

    How to Use the Set-ExchangeServer Cmdlet to Manage the Workload on Active Directory Domain Controllers

    Posted by Alin D on July 21, 2011

    Active Directory requests from Exchange Server can often overload domain controllers. Using the Exchange Management Shell’s Set-ExchangeServer command can protect domain controllers from additional stress. This tip from Exchange Server expert Brien Posey explains how AD requests can overload domain controllers, how the Set-ExchangeServer command works, and the parameters the command accepts to control which domain controllers Exchange Server uses.

    Placing Exchange servers into a dedicated Active Directory site can be a viable option for organizations running Exchange Server 2003. However, since Exchange Server 2007 uses AD site topology for message routing, having a dedicated site can be counterproductive.

    Organizations create dedicated Active Directory sites for Exchange servers to limit the impact on certain domain controllers. Abandoning a site that was previously dedicated to Exchange Server can overwhelm domain controllers that were not previously receiving LDAP requests.

    Exchange Server 2007 generally distributes requests across multiple domain controllers instead of overwhelming a single domain controller; however, global catalog servers can be an exception to this. Although Exchange Server tries to evenly distribute AD requests, external factors can play into how well domain controllers handle the increased workload.

    Exchange Server probably isn’t the only AD-dependent application in your network. Other applications generate just as many AD requests. These applications may focus on specific domain controllers, rather than spreading Active Directory requests across multiple ones. This means that you may have some domain controllers that are already overworked — before they even begin servicing Exchange Server requests.

    Another factor to consider is that all your domain controllers may not have the same hardware capabilities. Some domain controllers may be able to service more requests than others because they run on more robust hardware.

    If external factors are a concern, you can mitigate this by instructing Exchange Server which domain controller to use. One way is to use Exchange Management Shell’s Set-ExchangeServer command to instruct Exchange servers on which domain controllers to use — or not to use.

    Note: If you use the Set-ExchangeServer command, you should use the –WhatIf parameter first. Appending the –WhatIf parameter to the Set-ExchangeServer command allows you to see what would’ve happened if the command had actually been executed. If you’re satisfied with the results, you can remove the –WhatIf parameter and execute the command.

    Table 1 lists the parameters that can be used with this command to control which domain controllers Exchange Server uses.

    Set-ExchangeServer cmdlet parameters that determine which domain controllers Exchange Server will use.

    • Identity: Specifies the name, GUID, or distinguished name of the server to which you want to apply the command.
    • DomainController: Places no restrictions on domain controller selection; it simply specifies which domain controller should be used while the Set-ExchangeServer command itself is processed.
    • StaticConfigDomainController: Tells Exchange Server which domain controller the server should use in conjunction with DSAccess.
    • StaticDomainControllers: Provides Exchange Server with a list of domain controllers that the specified server should use for directory service access (see note below).
    • StaticExcludedDomainControllers: Excludes one or more domain controllers from being used by a specified Exchange server.
    • StaticGlobalCatalogs: Provides the specified Exchange server with a list of the global catalog servers it should use.

    The Set-ExchangeServer command can force a specified server to use (or not use) specific domain controllers. Under normal circumstances, avoid assigning Exchange Server a static list of domain controllers.

    However, if you previously used a dedicated Active Directory site for Exchange, then using a static list of domain controllers is preferred, as long as the placement of the dedicated site does not negatively impact message routing.

    Note: If you must use the Set-ExchangeServer command to provide Exchange with a static list of domain controllers, I’d recommend informing Exchange Server which domain controllers it should not use, rather than which domain controllers it should use. If you authorize Exchange Server to use only one or two domain controllers, and those domain controllers become inaccessible, then Exchange Server will fail even though there may be other domain controllers online.
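    Putting the note into practice, the following Exchange Management Shell sketch (server and domain controller names are placeholders) excludes two overworked domain controllers and previews the change with –WhatIf first:

    ```powershell
    # Preview the change first; nothing is modified while -WhatIf is present
    Set-ExchangeServer -Identity EX01 `
        -StaticExcludedDomainControllers DC01.contoso.com,DC02.contoso.com -WhatIf

    # If the preview looks right, run the command for real
    Set-ExchangeServer -Identity EX01 `
        -StaticExcludedDomainControllers DC01.contoso.com,DC02.contoso.com

    # Inspect the resulting static configuration
    Get-ExchangeServer -Identity EX01 -Status | Format-List Static*
    ```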

    Posted in TUTORIALS

    How to Configure Active Directory Sites and Replication

    Posted by Alin D on June 22, 2011

    Why Active Directory Sites Are Needed

    Nowadays, most companies do business from multiple office locations, which might be spread across a single metropolitan area or encompass an entire state, country, or even multiple international locations. Active Directory includes the concept of sites, which are groupings of computers and other objects connected by a high-speed local area network (LAN) connection.

    An individual site includes computers that are on one or more Internet Protocol (IP) subnets. It can encompass one building or several adjacent buildings in a campus setting. The image below shows an example with two sites, one located in Los Angeles and the other in Dallas. Sites are connected to each other by slower wide area network (WAN) connections that might not always be available and are always configured with separate IP subnets. It is important to configure diverse locations connected by WAN links as separate sites to optimize the use of the WAN link, especially if your company pays for the link according to the length of time it is active or the amount of data sent across it.

    The following are several benefits that you achieve by creating sites:

    Configurable replication: You can configure replication between sites to take place at specified intervals and only during specified times of the day. Doing so enables you to optimize bandwidth usage so that other network traffic between sites can proceed without delay.

    Isolation of poorly connected network segments: You can place network segments connected by less reliable connections such as dial-up links in their own site and bridge these sites according to network connectivity.

    Site-based policies: If certain locations such as branch offices need policies that should not be applied elsewhere on the network, you can configure site-based Group Policy to apply these policies.

    The following are several factors you should take into account when planning the site structure of your organization:

    Physical environment: You should assess the geographic locations of your company’s business operations, together with the nature of their internal and external links. It might be possible to include multiple locations (for example, on a campus) in a single site if they are connected by reliable high-speed links (such as a T3 line).

    Data replication versus available bandwidth: A location that needs the most up-to-date Active Directory information and is connected with a high-speed link can be on the same site as the head office location. When properly configured, the network’s site structure should optimize the process of Active Directory Domain Services (AD DS) replication.

    Types of physical links between sites: You should assess the type, speed, availability, and utilization of each physical link. AD DS includes site link objects that you can use to determine the replication schedule between sites that it links. A cost value can also be associated with it; this value determines when and how often replication can occur.

    Site links and site link bridges: Active Directory provides for site links and site link bridges so that you can group sites together for optimized intersite replication.

    These concepts are discussed later in this article.

    How to configure sites and subnets

    Active Directory provides the Active Directory Sites and Services snap-in, which enables you to perform all configuration activities pertinent to sites. When you first open this snap-in, you will notice folders named Subnets and Inter-Site Transports as well as a site named Default-First-Site-Name. By default, the new domain controller is placed in this site when you first install Active Directory. You can rename this site to whatever you want, just as you can rename a file or folder.

    This section shows you how to create sites, add domain controllers to sites, and associate IP subnets with specific sites.

    Creating Sites

    You can create additional sites by using the Active Directory Sites and Services snap-in, as described by the following procedure:

    Step 1. Click Start > Administrative Tools > Active Directory Sites and Services.

    Step 2. Right-click Sites and choose New Site.

    Step 3. In the New Object – Site dialog box shown in next screenshot, type the name of the site. Select a site link object from the list provided and then click OK.

    Step 4. Windows informs you that the site has been created and reminds you of several other tasks that you should perform, as shown in next image. Click OK.

    After you have created the new site, it appears in the console tree of Active Directory Sites and Services. The new site includes a default Servers folder that holds all domain controllers assigned to the site, as well as an NTDS Site Settings container that is described in a later section.

    Adding Domain Controllers

    The first task you should undertake is to add one or more domain controllers to your new site. To do this, proceed as follows:

    Step 1. Open Active Directory Sites and Services and expand the site that currently holds the domain controller that you want to move to the new site.

    Step 2. Select the Servers folder to display the domain controllers currently located in this site in the details pane.

    Step 3. Right-click the server you want to move and choose Move.

    Step 4. In the Move Server dialog box shown in below image, select the site to which you want to move the server and then click OK.

    Creating and Using Subnets

    Recall that the purpose of using sites is to control Active Directory replication across slow links between different physical locations. By default, Active Directory does not know anything about the physical topology of its network. You must configure Active Directory according to this topology by specifying the IP subnets that belong to each site you have created. Use the following procedure to assign subnets to each site:

    Step 1. In the console tree of Active Directory Sites and Services, right-click the Subnets folder and choose New Subnet.

    Step 2. In the New Object – Subnet dialog box shown in next image, enter the IPv4 or IPv6 subnet address prefix being configured.

    Step 3. Select the site for this network prefix from the sites listed and then click OK. The subnet you have added appears in the console tree under the Subnets folder.

    You can view and edit a limited number of properties for each subnet in Active Directory Sites and Services. Right-click the subnet and choose Properties. The various tabs of the Properties dialog box shown in next image enable you to do the following:

    General: Provide a description of the site. You can also change the site to which the subnet is assigned. The description is for information purposes and helps you document the purpose of the site for others who might be administering the site later.

    Location: Provide a description of the location of the site. This is also for information purposes.

    Object: View the site’s Active Directory canonical name (CN) and its update sequence number (USN), and protect it from accidental deletion.

    Security: Modify security permissions assigned to the object.

    Attribute Editor: View and edit attributes set by Active Directory for the site.

    Configuring Active Directory Replication

    You have learned that all domain controllers act as peers and that most changes to AD DS can be made at any domain controller. AD DS uses the process of multimaster replication to propagate these changes to other domain controllers in the domain. In addition, the global catalog is replicated to other global catalog servers in the forest. Application directory partitions are replicated to a subset of domain controllers in the forest, and the schema and configuration partitions are also replicated to all domain controllers in the forest. You can see that replication is an important process that must take place in a timely manner so that updates to AD DS are synchronized properly among all domain controllers in the forest. The amount of replication necessary to maintain AD DS could easily overwhelm network bandwidth, especially on slow-speed WAN links.

    Concepts of Active Directory Replication

    In general, the process of replication refers to the copying of data from one server to another. This can include both the AD DS database and other data such as files and folders. In particular, Active Directory replicates the following components or partitions of the database to other domain controllers:

    Domain partition: Contains all domain-specific information such as user, computer, and group accounts. This partition is replicated to all domain controllers in its domain but is not replicated to other domains in the forest.

    Configuration partition: Contains forestwide configuration information. This partition is replicated to all domain controllers in the forest.

    Schema partition: Contains all schema objects and attributes. This partition is replicated from the schema master to all other domain controllers in the forest.

    Application directory partitions: These partitions contain application-specific (such as DNS) information that is replicated to specific domain controllers in the forest.

    Global catalog: As introduced in Chapter 1, the global catalog contains partial information on all objects in each domain that is replicated to all global catalog servers in the forest.

    Active Directory replicates all data in these partitions to the specified domain controllers in the domain so that every domain controller has an up-to-date copy of this information. By default, any domain controller can replicate data to any other domain controller; this process is known as multimaster replication. A read-only domain controller (RODC) can receive updated information from another domain controller (inbound replication), but it cannot replicate any information to other servers. If your domain is spread across more than one site, a single domain controller in each site, known as a bridgehead server, replicates information to bridgehead servers in other sites; other domain controllers in each site replicate information only to domain controllers in their own site.

    An RODC can receive updates to the schema, configuration, and application directory partitions and the global catalog from any Windows Server 2003 or 2008 domain controller in its domain; however, it can receive updates to the domain partition from domain controllers running Windows Server 2008 only.

    Intersite and Intrasite Replication

    Most of the discussion in this chapter centers around the topic of intersite replication because this is the type of replication that you will need to configure and troubleshoot.

    However, you should keep in mind that replication also occurs between domain controllers on the same site, in other words, intrasite replication. The Knowledge Consistency Checker (KCC) automatically configures intrasite replication so that each domain controller replicates with at least two others. In this way, should one replication partner become temporarily unavailable, no domain controller will miss an update. The KCC uses a default bidirectional ring topology, with additional connections as required to limit the number of hops between replication partners to three or fewer.

    Intrasite replication is totally automatic and requires no additional configuration after you have established your site topology. It is possible to modify intrasite replication if required, such as the configuration of replication intervals.

    One-Way Replication

    An RODC supports inbound replication of Active Directory, including the SYSVOL folder, only. This type of replication is referred to as one-way replication, and it is what makes an RODC suitable for a location such as a branch office where physical security can be an issue. In one-way replication, changes to the AD DS database are replicated to the RODC, but outbound replication does not occur; consequently, any changes configured at the RODC are not saved in the database. Note that you can prevent certain attributes from replicating to the RODC.

    It is also possible to configure one-way replication connections between other domain controllers. However, this is not recommended because several problems can occur, such as health check topology errors, staging issues, and problems with the DFS replication database. Microsoft recommends that administrators make changes only at servers designated as primary servers. You can also configure share permissions on the destination servers so that normal users have only Read permissions. Then it is not possible to replicate changes backward from the destination servers and you have, in effect, a one-way replication scheme.

    Bridgehead Servers

    A bridgehead server is the domain controller designated by each site’s KCC to take control of intersite replication. The bridgehead server receives information replicated from other sites and replicates it to its site’s other domain controllers. It ensures that the greatest portion of replication occurs within sites rather than between them. In most cases, the KCC automatically decides which domain controller acts as the bridgehead server. However, you can use Active Directory Sites and Services to specify which domain controller will be the preferred bridgehead server by using the following steps:

    Step 1. In Active Directory Sites and Services, expand the site in which you want to specify the preferred bridgehead server.

    Step 2. Expand the Servers folder to locate the desired server, right-click it, and then choose Properties.

    Step 3. From the list labeled Transports available for inter-site data transfer, select the protocol(s) for which you want to designate this server as a preferred bridgehead server and then click Add.

    As shown for the IP transport protocol in next image, the protocol you have configured appears in the list on the bottom-right of the dialog box.

    Replication Protocols

    The IP and SMTP replication protocols used by Active Directory to replicate the AD DS database between sites were introduced earlier in this article.

    If you use SMTP replication, the data is replicated according to times you have configured for transmitting email messages. You must install and configure an enterprise certification authority (CA) and SMTP on all domain controllers that use the SMTP site link for data replication. The CA signs the SMTP messages exchanged between domain controllers, verifying the authenticity of AD DS updates. SMTP replication utilizes 56-bit encryption.

    Ports Used for Intersite Replication

    The default port used by the ISTG for RPC-based intersite replication is TCP and UDP port 135. LDAP over Secure Sockets Layer (SSL) uses TCP and UDP port 636, Kerberos uses TCP and UDP port 88, Server Message Block (SMB) over IP uses TCP and UDP port 445, and DNS uses TCP and UDP port 53. Global catalog servers also use TCP ports 3268 and 3269. You can modify the default ports for RPC-based replication by editing the following Registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters


    Add a REG_DWORD value named TCP/IP Port and specify the desired port number. In addition, edit the following Registry key:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTFRS\Parameters

    Add a REG_DWORD value named RPC TCP/IP Port Assignment and specify the same port number. Configure these changes at every domain controller, and make sure that you have configured all firewalls to pass traffic on the chosen port.
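    As a sketch, the two values can be created from an elevated PowerShell prompt; the port number 50000 is an arbitrary example, not a recommendation:

    ```powershell
    # Pin RPC-based replication to an example port; repeat on every domain controller
    $port = 50000

    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' `
        -Name 'TCP/IP Port' -PropertyType DWord -Value $port -Force

    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\NTFRS\Parameters' `
        -Name 'RPC TCP/IP Port Assignment' -PropertyType DWord -Value $port -Force
    ```

    Remember to allow the chosen port through every firewall between the sites.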

    Replication Scheduling

    Active Directory permits you to schedule replication so that you can control the amount of bandwidth consumed. This is important because bandwidth affects the efficiency of replication. The frequency of replication is a trade-off between bandwidth consumption and maintaining the AD DS database in an up-to-date condition.

    Although you will be mainly concerned with modifying the schedule of intersite replication, we also take a brief look at scheduling intrasite replication in this section.

    Intersite Replication Scheduling

    By default, intersite replication takes place every three hours (180 minutes) and occurs 24 hours a day, seven days a week. You can modify both the interval and frequency of replication, as described here.

    To configure intersite replication scheduling, proceed as follows:

    Step 1. In Active Directory Sites and Services, expand the Inter-Site Transports folder.

    Step 2. Click the transport (normally IP) containing the site link whose schedule you want to modify. The details pane displays all site links and site link bridges you have configured

    Step 3. Right-click the appropriate site link and choose Properties to display the General tab of the properties dialog box for the site link.

    Step 4. In the text box labeled Replicate every, type the number of minutes between replications and then click OK.

    Active Directory processes the interval you enter as the nearest multiple of 15 minutes, up to a maximum of 10,080 minutes (one week).
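    The rounding behavior can be illustrated with a small helper; the exact rounding mode is an assumption on my part, since the snap-in simply snaps the stored value to a multiple of 15:

    ```powershell
    function Get-EffectiveReplInterval([int]$Minutes) {
        # Snap to the nearest multiple of 15 minutes, capped at 10,080 (one week)
        $snapped = [math]::Round($Minutes / 15) * 15
        [math]::Min([math]::Max($snapped, 15), 10080)
    }

    Get-EffectiveReplInterval 100    # 105
    Get-EffectiveReplInterval 20000  # 10080 (capped at one week)
    ```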

    If you need to specify that replication not take place during certain times of the day (such as business hours when other WAN traffic must be able to proceed without delay), you can restrict the times that replication takes place. To do so, use the following procedure:

    Step 1. Access the Properties dialog box for the site link whose replication times you want to specify, as already described and shown in the image above.

    Step 2. To limit the time intervals in which replication can take place, click Change Schedule.

    Step 3. In the Schedule for (site link name) dialog box, select the time block for which you want to deny replication and then click OK.

    Step 4. In the text box labeled Replicate every, use the up/down arrows to specify the desired replication interval or type the replication interval. Then click OK

    You might have to ignore the replication schedule so that replication can occur at any time of day or night. This is useful if you want to ensure that new changes are replicated in a timely manner. To do so, right-click the transport protocol in the console tree of Active Directory Sites and Services, and choose Properties. On the General tab of the protocol’s Properties dialog box, select the Ignore schedules check box and then click OK. Performing this procedure causes Active Directory to ignore availability schedules and replicate changes to AD DS at the configured interval. Site links are always available for replication. Clear the Ignore schedules check box to reenable the replication schedules.

    Notice that this is the same dialog box from which you can choose whether to bridge all site links, as discussed earlier in this article.

    Intrasite Replication Scheduling

    By default, intrasite replication takes place once per hour. You can change this schedule to twice or four times per hour according to specific time blocks and specific connection objects. To configure intrasite replication scheduling, proceed as follows:

    Step 1. In Active Directory Sites and Services, expand the site in which the connection you want to schedule is located.

    Step 2. Expand one of the servers included in the intrasite replication to reveal the NTDS Settings folder.

    Step 3. Right-click this folder and choose Properties.

    Step 4. On the General tab of the connection’s Properties dialog box, click Change schedule.

    Step 5. In the Schedule for dialog box, select the desired time block and replication interval (once, twice, or four times per hour), and then click OK.

    Forcing Intersite Replication

    If you have performed necessary actions such as adding new users or groups for a branch office, you might want Active Directory replication to occur immediately.

    In such a case, you can force replication from Active Directory Sites and Services by using the following procedure:

    Step 1. In the console tree of Active Directory Sites and Services, expand the server to which you want to force replication.

    Step 2. Select the NTDS Settings folder to display the connection objects in the details pane.

    Step 3. Right-click the desired connection object and choose Replicate Now.
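    The same push can be performed from the command line with the built-in repadmin tool (the domain controller name is a placeholder):

    ```powershell
    # Push changes from DC01 to all partners: /A = all partitions, /d = report by DN,
    # /e = include partners in other sites, /P = push rather than pull
    repadmin /syncall DC01 /AdeP

    # Summarize replication health afterward
    repadmin /replsummary
    ```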

    Posted in Windows 2008 | 3 Comments »

    Password security in SQL Server part 1

    Posted by Alin D on June 20, 2011

    SQL Server Password Security

    One of the key ways to protect your SQL Server is to use strong, secure passwords for your SQL Server login accounts. One of the biggest security holes in SQL Server 2000 and older versions of Microsoft SQL Server was that the server installed with a blank system administrator (SA) password by default and would allow you to use a blank password, thereby permitting anyone to connect without much work at all. Even with newer versions of Microsoft SQL Server, the SA account is still a potential weakness, as is any SQL Server Authentication-based login, because SQL accounts can easily be broken into by brute-force password attacks. When using SQL Azure, there is no SA account available for you, the Microsoft customer, to work with; the SA account is reserved for the exclusive use of Microsoft.

    When using SQL Azure as your database instance, only SQL Authentication is available. SQL Azure doesn’t support Windows Authentication for use by Microsoft’s customers, as the SQL Azure database server doesn’t support being added to a company domain. The Azure database servers do support Windows Authentication, but only for use by the Azure administration team within Microsoft.

    SQL Authentication logins are more susceptible to these login attacks than a Windows Authentication login because of the way that these logins are processed. With an SQL Authentication login, each connection to the SQL database passes the actual username and password from the client computer to the SQL Server engine. Because of this, an attacker can simply sit there passing usernames and passwords to the server until a connection is successfully made.

    With a Windows Authentication login the process is much different from the SQL Authentication process. When the client requests a login using Windows Authentication, several components within the Windows Active Directory network are needed to complete the request. These include the Kerberos Key Distribution Center (KDC) when Kerberos is used for authentication, and the Windows Active Directory domain controller when NTLM (NT LAN Manager) authentication is used. The Kerberos KDC runs on each domain controller within an Active Directory domain that has the Active Directory Domain Services (AD DS) role installed.

    The process that occurs when a Windows Authentication connection is established is fairly straightforward once you know the components involved. When the client requests a connection, the SQL Server Native Client contacts the KDC and requests a Kerberos ticket for the Service Principal Name (SPN) of the Database Engine. If the request to the KDC fails, the SQL Server Native Client retries the request using NTLM authentication. The ticket contains the Security Identifier (SID) of the Windows domain account, as well as the SIDs of the Windows groups of which the domain account is a member.

    Once the SQL Server Native Client has received the ticket from the KDC, the ticket is passed to the SQL Server service. The SQL Server then verifies the ticket against the Kerberos or NTLM service on the domain controller to confirm that the SID exists, is active, and was generated by the requesting computer. Once the Windows ID is confirmed against the domain, the SIDs for the local server groups that the user is a member of are added to the Kerberos ticket and processing within the SQL Server begins. If any of the following checks fail, the connection is rejected:

    Step 1. Verify whether there is a Windows Authenticated login that matches the user. If there is no specific Windows login, check whether there is a Windows domain group or Windows local group to which the user belongs.

    Step 2. Verify that the login, or the domain group that has the login as a member, is enabled and has been granted the right to connect.

    Step 3. Verify that the login or domain group has the right to connect to the specific endpoint. At this point the Windows login has successfully connected to the SQL Server instance.

    Step 4. Assign the login ID of the Windows login as well as any authorized domain groups. These login IDs are gathered into an internal array within the SQL Server engine, which is used by the last step of the authentication process as well as by various processes as the user interacts with objects in the SQL Server databases.

    Step 5. Take the database name that was included in the connection string (or the login’s default database if no database was specified) and check whether any of the login IDs in the internal array exist within that database as a user. If one of the login IDs exists within the database, the login to the SQL Server is complete. If none exist and the database has the guest user enabled, the user is connected with the permissions of the guest user. If none exist and the guest user is not enabled, the connection is rejected with a default-database-specific error message.
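    The final authorization steps described above can be sketched as follows. This is an illustrative Python model of the decision sequence, not SQL Server code; all names (authorize_connection, login_ids, and so on) are invented for the example.

```python
# Hypothetical sketch of the final login checks described above.
# Real SQL Server internals are far more involved; names are illustrative.

def authorize_connection(login_ids, endpoint_grants, db_users, guest_enabled,
                         requested_db):
    """Simulate the last steps of a Windows Authentication login.

    login_ids       -- internal array: the login's SID plus its group SIDs
    endpoint_grants -- login IDs granted the right to connect to the endpoint
    db_users        -- login IDs mapped to a user in the target database
    guest_enabled   -- whether the guest user is enabled in that database
    requested_db    -- database from the connection string (or login default)
    """
    # The login, or a group containing it, must be granted connect rights.
    if not any(lid in endpoint_grants for lid in login_ids):
        return "rejected: no CONNECT permission on the endpoint"
    # Map the login into the database: explicit user, else guest, else reject.
    if any(lid in db_users for lid in login_ids):
        return f"connected to {requested_db} as mapped user"
    if guest_enabled:
        return f"connected to {requested_db} as guest"
    return "rejected: no user mapping and guest is disabled"
```

    For example, a login whose domain group is granted connect rights but which has no user in the database would fall through to the guest check, mirroring the behaviour described above.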

    Extended Protection

    Extended Protection is a feature of the Windows operating system that was introduced with the release of Windows Server 2008 R2 and Windows 7. This feature provides an additional level of preauthentication protection for client-to-server communications when both the client and server software support it. As of this writing, the only version of the Microsoft SQL Server product that supports this feature is Microsoft SQL Server 2008 R2; patches for older operating systems are available from http://www.microsoft.com/technet/security/advisory/973811.mspx. This feature enhances the protection that already exists when authenticating domain credentials using Integrated Windows Authentication (IWA).

    When Extended Protection is enabled, the authentication requests are bound both to the Service Principal Name (SPN) of the server to which the client application is connecting and to the outer Transport Layer Security (TLS) channel within which the IWA takes place. Extended Protection is not a global configuration; each application that wishes to use Extended Protection must be updated to enable it.

    If you are using Windows 7 and Windows Server 2008 R2 or later for both the client and server, if the SQL Server 2008 R2 Native Client or later is being used to connect to a SQL Server 2008 R2 or later instance, and if Extended Protection is enabled, then Extended Protection must also be negotiated before the Windows authentication process can be completed. Extended Protection uses two techniques, service binding and channel binding, to help protect against an authentication relay attack.

    Service binding is used to protect against luring attacks by requiring that, as part of the connection process, the client send a signed Service Principal Name (SPN) of the SQL Server service to which it is attempting to connect. As part of the response, the server validates that the SPN submitted by the client matches the server’s own SPN. If the SPNs do not match, the connection attempt is refused.

    Service binding protects against the luring attack, which works by having another service or application (such as Outlook, Windows Explorer, or a .NET application) connect to a compromised server (such as a file server or Microsoft Exchange server). The attacking code then takes the captured signed SPN and attempts to pass it to the SQL Server to authenticate. Because the SPNs do not match and the signed SPN is for another service, the connection to the SQL Server from the compromised server is rejected. Service binding imposes only a negligible one-time cost, as the SPN signing happens once, when the connection is being made.
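    The service-binding check amounts to comparing the SPN the client signed against the SPNs the server actually owns. This Python fragment is a conceptual illustration only (the real mechanism verifies a cryptographically signed SPN inside the authentication exchange, which is omitted here), and the names are invented for the example.

```python
# Conceptual sketch of service binding: a signed SPN captured from another
# service (the luring attack) will not match the SQL Server's own SPNs.

def validate_service_binding(signed_spn_from_client, server_spns):
    """Accept the connection only if the client-signed SPN is one the
    server actually owns; otherwise the relayed credential is refused."""
    return signed_spn_from_client in server_spns

server_spns = {"MSSQLSvc/DB1.contoso.local:1433"}
# A legitimate client signed the SQL Server's SPN:
assert validate_service_binding("MSSQLSvc/DB1.contoso.local:1433", server_spns)
# An SPN captured from a file-server session is rejected:
assert not validate_service_binding("cifs/files.contoso.local", server_spns)
```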

    Channel binding works by creating a secure channel between the client and the SQL Server instance, encrypting all traffic within the session using Transport Layer Security (TLS). The protection comes from the SQL Server service verifying the authenticity of the client by comparing the client’s channel binding token (CBT) with the CBT of the SQL Server service. Channel binding protects the client from falling prey to both the luring and the spoofing attacks. However, the cost of this protection is much higher because the TLS encryption must be maintained over the lifetime of the connection.

    To enable Extended Protection, you first need to decide whether you wish to use service binding protection or channel binding protection. In order to use channel binding, you must force encryption for all SQL Server connections. With SQL Server encryption disabled, only service binding protection is possible.

    Extended Protection is enabled from within the SQL Server 2008 R2 Configuration Manager for all editions of the Microsoft SQL Server 2008 R2 database engine. Within the SQL Server Configuration Manager, select “SQL Server Services” in the left-hand pane, double-click the SQL Server service for which you wish to enable Extended Protection, and select the Advanced tab in the window that opens. The Extended Protection option has three values from which you can select. “Off” disables Extended Protection and allows any connection, whether or not the client supports Extended Protection. “Allowed” requires Extended Protection from operating systems that support it, while allowing operating systems that do not support Extended Protection to connect without error. “Required” tells the SQL Server to accept only those connections from client computers whose operating system supports Extended Protection. If your SQL Server has multiple Service Principal Names (SPNs) registered within the Windows domain, you will need to configure the Accepted NTLM SPNs setting. This setting supports up to 2048 characters and accepts a semicolon-separated list of the SPNs that the SQL Server should accept.

    As an example, if the SQL Server needed to accept the SPNs MSSQLSvc/server1.yourcompany.local and MSSQLSvc/server2.yourcompany.local, then you would specify a value of “MSSQLSvc/server1.yourcompany.local;MSSQLSvc/server2.yourcompany.local” in the Accepted NTLM SPNs setting, as shown in the screenshot below.
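    A quick sketch of how such a semicolon-separated value could be parsed and length-checked before use; the helper name and validation logic are our own invention, and only the 2048-character limit comes from the setting described above.

```python
# Illustrative parser for a semicolon-separated Accepted NTLM SPNs value.
# The 2048-character limit is the one stated for the setting; the rest of
# the logic is a hypothetical sketch, not SQL Server's actual behaviour.

MAX_ACCEPTED_NTLM_SPNS_LEN = 2048

def parse_accepted_spns(value):
    """Split the configured value into individual SPNs, ignoring empty
    entries and rejecting values over the documented length limit."""
    if len(value) > MAX_ACCEPTED_NTLM_SPNS_LEN:
        raise ValueError("Accepted NTLM SPNs is limited to 2048 characters")
    return [spn.strip() for spn in value.split(";") if spn.strip()]

spns = parse_accepted_spns(
    "MSSQLSvc/server1.yourcompany.local;MSSQLSvc/server2.yourcompany.local")
```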

    After changing any of the Extended Protection properties, you will need to restart the SQL Server instance for the change to take effect. Because SQL Azure servers are hosted within Microsoft’s domain rather than your company’s domain, Extended Protection is not available when using SQL Azure.

    Configuring the Accepted NTLM SPNs

    Service Principal Names (SPNs) are unique service names within a Windows domain that uniquely identify an instance of a service, regardless of the system the service is running on or how many services are running on a single machine. While a single SPN can reference only a single instance of a service, a single instance of a service can have multiple SPNs registered to it. The most common reason for multiple SPNs is that a service needs to be accessed under multiple server names. Before an SPN can be used by Kerberos authentication, it must be registered within Active Directory. The SPN, when created, is registered to a specific account within the domain, which must be the account under which the Windows service runs. Because an SPN can only be registered to a single service, an SPN can only be registered to a single Windows account. If the account under which the Windows service runs changes, the SPN must be removed from the original account and assigned to the new account. When the client software attempts to connect using Kerberos authentication, the client locates the instance of the service and creates the SPN for that service. The client software then connects to the remote service and presents the created SPN for the service to authenticate. If the authentication fails, the client disconnects, returning an error message to the end user.

    The client computer is able to create an SPN for the remote service very easily, as the format for an SPN is simple: <service class>/<host>:<port>/<service name>. The <service class> and <host> values are required, while the <port> and <service name> values are optional. In the case of Microsoft SQL Server, the <service class> value will be MSSQLSvc, while the <host> value will be the name that client computers use to connect to the SQL Server. As an example, for a SQL Server instance listening on the default TCP port 1433 on a server named DB1.contoso.local, the SPN would be “MSSQLSvc/DB1.contoso.local:1433”, registered to the Windows account (such as CONTOSO\sqlserver) under which the service runs. SPNs are created automatically when the SQL Server service starts up, but only for the default name under which the service runs, typically the name of the SQL Server. Other SPNs can be registered manually as needed by a member of the “Domain Admins” group using the setspn command-line tool with the -A switch, followed by the SPN to create and the account to register it to. If the DB1.contoso.local server also needed to support the name mydatabase.contoso.local, the command shown in the example below would be used.

    setspn -A MSSQLSvc/mydatabase.contoso.local:1433 CONTOSO\sqlserver

    Once the SPN has been created and has replicated to all the domain controllers, clients will be able to successfully authenticate against the new SPN. This replication can take anywhere from a few seconds to several hours, depending on how domain replication is configured and the speed of the network links between sites. SPNs do not need to be used with SQL Azure instances, as you must use SQL Authentication with SQL Azure, and SPNs are only used with Windows Authentication via Kerberos.
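    The SPN format described above is easy to assemble programmatically. The following Python helper is an illustrative sketch of the <service class>/<host>:<port>/<service name> format, with the optional parts handled as the text describes; the function name is invented for the example.

```python
# Hypothetical helper that assembles an SPN from its parts:
# <service class>/<host>:<port>/<service name>, where port and
# service name are optional, mirroring the format in the text.

def build_spn(service_class, host, port=None, service_name=None):
    spn = f"{service_class}/{host}"
    if port is not None:
        spn += f":{port}"          # optional port component
    if service_name is not None:
        spn += f"/{service_name}"  # optional service-name component
    return spn

# Minimal required form, and the form used for the default SQL Server port:
build_spn("MSSQLSvc", "DB1.contoso.local")
build_spn("MSSQLSvc", "mydatabase.contoso.local", 1433)
```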

    Posted in SQL | Tagged: , , , , , , , , , , , , , , , , , , , | 1 Comment »

    Active Directory Infrastructure Master explained

    Posted by Alin D on June 2, 2011

    The Infrastructure Master (IM) is a domain-wide FSMO (Flexible Single Master of Operations) role responsible for an unattended process that “fixes-up” stale references, known as phantoms, within the Active Directory database or DIT (Directory Information Table). Phantoms are created on Domain Controllers (DCs) that require a database cross-reference between an object within their own database and an object from another domain within the forest. This occurs, for example, when you add a user from one domain to a group within another domain in the same forest.

    Each DC is individually responsible for creating its own phantoms with the notable exception of Global Catalogs (GCs). Since GCs store a partial copy of all objects within the forest, they are able to create cross-domain references without the need for such phantoms. Phantoms are deemed stale when they no longer contain up-to-date data, which occurs because of changes that have been made to the foreign object the phantom represents, e.g., when the target object is renamed, moved, migrated between domains or deleted. The IM is exclusively responsible for locating and fixing stale phantoms. Any changes introduced as a result of the “fix-up” process must then be replicated to all remaining DCs within the domain.

    Dependent technologies

    The Active Directory database

    Active Directory’s DIT, logs and other database-specific files are maintained using an ESE (Extensible Storage Engine) database typically stored in %SystemRoot%\NTDS. An ESE database comprises tables, columns and indices over columns. A number of tables exist, including a data-table and a link-table. The data-table contains the bulk of Active Directory’s objects and their properties. The link-table stores the relationships between cross-referenced objects, allowing the DSA (Directory Services Agent) to, for example, efficiently compute a given user’s group membership.

    Residing between the core directory service and ESE is the dblayer (database layer). The dblayer forms the bridge between the core directory and ESE and ensures that their respective requirements are met.

    Distinguished Name Tags

    Records within Active Directory’s database use a 32-bit numeric key known as a DNT (Distinguished Name Tag). Simplified, DNTs are comparable to row numbers within a spreadsheet. These numeric row references are assigned sequentially and are never re-used; they are the means by which records within the Active Directory database cross-reference one another, such as in the case of group membership.

    Note: Distinguished Name Tags are local to each Domain Controller, i.e., they do not replicate. Two Domain Controllers within the same domain will, in almost all cases, use entirely different DNTs to represent the same objects.

    The dblayer

    The dblayer’s cross-referencing mechanism dictates that the two objects involved in a cross-reference are local to the database maintaining them. For circumstances in which one of the objects is not local to the database, Active Directory provides two mechanisms to meet the cross-reference criterion. They are represented by two distinct structural entities: the aforementioned phantom and Foreign Security Principals (FSPs).

    As previously discussed, phantoms provide a database-local reference to objects in other domains within the same forest. FSPs, however, provide a similar local reference to objects beyond the local forest, e.g., objects across an external trust. Phantoms maintain limited local data about the foreign object they represent typically consisting of only three of the target object’s properties:

    DN (Distinguished Name)
    object GUID (Globally Unique Identifier)
    object SID (Security Identifier)

    Note: Phantoms also maintain a reference count indicating how many objects within the local DIT refer to them. When the reference count reaches 0, the phantom is removed.
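    The create-on-first-reference and delete-at-zero behaviour of phantoms can be modelled with a toy reference-counted table. This Python sketch is purely illustrative (the class names are invented, and real DIT internals are far more involved); it only demonstrates the lifecycle the note above describes.

```python
# Toy model of phantoms and their reference counts, as described above.
# A phantom caches three properties of the foreign object (DN, GUID, SID)
# and is removed once no local object refers to it any more.

class Phantom:
    def __init__(self, dn, guid, sid):
        self.dn, self.guid, self.sid = dn, guid, sid  # the three cached properties
        self.refcount = 0

class PhantomTable:
    def __init__(self):
        self._by_dn = {}

    def add_reference(self, dn, guid="", sid=""):
        # Create the phantom on first reference, then bump its count.
        phantom = self._by_dn.setdefault(dn, Phantom(dn, guid, sid))
        phantom.refcount += 1
        return phantom

    def release_reference(self, dn):
        phantom = self._by_dn[dn]
        phantom.refcount -= 1
        if phantom.refcount == 0:   # no local object refers to it any more
            del self._by_dn[dn]

    def __contains__(self, dn):
        return dn in self._by_dn
```

    Adding a foreign user to two local groups would bump the phantom's count to two; removing both memberships drops it to zero and the phantom disappears, which matches the cleanup behaviour described in the note.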

    In contrast, FSPs maintain only the foreign object’s SID and, perhaps, a foreign security identifier, an implementation-specific component defined upon the origin of a non-Windows security principal.


    Posted in Windows 2008 | Tagged: , , | Leave a Comment »