Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘Controller’

Five Domain Controller security tips

Posted by Alin D on August 19, 2011

A domain controller is just that—a controller. They control authentication, possibly authorization, some accounting, and generally hold the lifecycle of security identities for everything in your company that uses any part of Windows.

As such, special security considerations exist for domain controllers. How do you score on this front? Check out these five tips for hardening the entire environment around your domain controllers (DCs).

1. Limit physical access.

This is the single biggest mitigating factor you can provide in your overall domain controller security package. The overarching issue is that your domain controller is the central security authority for everything on your network, and, as you well know, there are many trivial ways to obtain information straight off a hard disk if you have local, physical access to a machine. The hashes alone offer everything a cracker needs to pass himself off as a true, legitimate, authenticated user, and they are easy to grab if you have the domain controller’s disk in hand. That is to say nothing of the possibility of actually logging on via those hashes and modifying logon scripts, installing malicious programs that replicate to other domain controllers, and so on.

If you have physical (not virtualized) domain controllers, then before you do anything else, buy a cage and a secure lock and put them behind it. Don’t let a DC run under the admin’s desk, or have your data center be a small closet with no lock. It holds the keys to the kingdom, your company’s security treasury, so secure it like you would blank checks: under lock and key.

2. Design correctly from the start.

A correctly designed Active Directory topology will contain threats so that even if a DC is compromised, your entire network of forests doesn’t have to be flattened and rebuilt. Make sure your forests and domains reflect the real, physical locations you have in different cities, counties, and countries; have your organizational units match the types of machines and people in your company; and let security groups represent the hierarchy of your organizational chart. Then, if a DC in one forest for Europe is compromised, you don’t have to rebuild Asia.

3. Virtualize your domain controllers.

By using virtual machines (VMs) as your domain controllers, you can encrypt the disks on which your virtual hard disks reside using BitLocker or another full-drive-encryption product. Then, ensure the host machines running those VMs are not joined to the domain. If by some chance someone makes off with your host machine and the DCs, the difficulty of decrypting the drive to reach the VHDs presents yet another obstacle to an attacker planting nefarious things in your directory.
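
As a sketch of what this looks like in practice (assuming a Windows Server 2008 R2 or later host with a TPM, and that the VHDs live on a dedicated D: volume; adjust the drive letter to your environment), the built-in manage-bde utility can encrypt the volume holding the VHDs:

```powershell
# Check the current BitLocker status of the volume holding the VHDs
manage-bde -status D:

# Turn on BitLocker for D:, adding a numerical recovery password protector.
# Record the recovery password somewhere safe (and offline).
manage-bde -on D: -RecoveryPassword

# Confirm encryption is in progress or complete
manage-bde -status D:
```

Remember the point above: the host itself should not be domain-joined, so a stolen host yields nothing but encrypted VHDs.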

4. Follow security trust best practices.

Know your boundaries, as security experts say. There’s a fantastic guide to understanding trusts and the various considerations therein on TechNet. Pay close attention to the Selective Authentication section, a great way to prevent random access attacks.

5. Secure the Directory Services Restore Mode password more carefully than any other password.

Directory Services Restore Mode (DSRM) is a special mode for fixing Active Directory offline when something’s gone wrong. The DSRM password is a special back door that provides administrative access to the directory; you use it in an offline, text-mode state. Protect this password like it’s the one thing that can sink your forest, because it is exactly that. You can also download a hotfix for Windows Server 2008 that will sync the DSRM password with the domain administrator account; if you have already installed Service Pack 2, you have this capability already. Just use this command:

ntdsutil “set dsrm password” “sync from domain account <DomainAdminAccount>” q q

Conclusion

Overall, if a domain controller is stolen or otherwise leaves your company’s possession in an unauthorized way, you can no longer trust that machine. Unfortunately, since that domain controller contains everything valuable and secret about your IT identities, the best (and most regrettable and painful) advice is simply to destroy that forest and rebuild it. That makes the first point in this article the most important proactive best practice there is.

Posted in TUTORIALS

How to troubleshoot poor Windows logon performance in Active Directory

Posted by Alin D on July 28, 2011

Introduction

Problems based around performance are often the most frustrating to resolve, mainly because there are so many variables to consider. In this article, I will focus on the difficult issue of diagnosing and resolving slow logon performance for users when logging in to their domain accounts.

When troubleshooting any performance problem, you must first define what is an acceptable delay. I’ve seen some environments where users experience 5-10 minute logon times and they don’t complain simply because they are used to it. Then I’ve seen other scenarios where even a one-minute delay is considered unacceptable. That’s why it’s important to first define what is reasonable so that you know when you have solved the problem.

Windows logon performance factors

It’s important to consider a variety of factors when looking for the cause of logon performance issues. Some of these factors include:

  • the proximity of domain controllers to your users
  • network connections and available bandwidth
  • hardware resources on the DCs (x64 vs. x86, memory, etc.)
  • the number of Group Policy Objects (GPOs) applied to the user and computer (which directly affects bandwidth)
  • the number of security groups the user and computer are members of (also directly affects bandwidth)
  • GPOs containing settings that require extra processing time such as:
    • loopback processing
    • WMI filters
    • ACL filtering
  • heavily loaded domain controllers caused by:
    • applications requiring authentication
    • inefficient LDAP queries from user scripts or applications
    • a DC hosting other apps such as Exchange, IIS, SQL Server, etc.
  • client configuration:
    • memory, disk, processor, etc.
    • network interface (10/100/1000)
    • subnet mapped properly to the site
    • DNS configuration

Define the scope

I always spend time asking basic questions in order to define the true scope of the problem. This will take some effort because these problems are usually defined by users who complain, while there may also be users who have just learned to live with it. Below are some important questions to ask:

  • Are the problems defined to a single site, security group, OU, department, type of client (laptop or desktop), or OS?
  • Does the problem happen at a particular time of day?
  • Does the problem occur when you are in the office or connecting over the VPN?
  • Describe the symptoms:
    • Does the delay occur at a specific point each time (e.g. at “Network Settings” on the logon screen)?
    • Does it occur before or after the logon screen?
  • When did this start happening?

Data gathering and Tools

There are some basic tools that I use to gather data. For performance problems, I like to cast a wide net and collect all that I can. Here are some examples:

  • Run Microsoft Product Support Reports (MPSReports) on clients and their authenticating DCs. This is a common tool that collects data for all event logs, MSINFO32, NetDiag, IPConfig, drivers, hotfixes and more. Hewlett-Packard also has its own version called HPS Reports which is, in my opinion, superior to Microsoft’s tool and will collect specific Active Directory data if run on a DC. It also collects a plethora of hardware-related information, even for non-HP hardware.
  • On the client, use Microsoft KB article 221833 to set verbose logging for Winlogon. This will provide excellent details in the %Systemroot%\Debug\UserMode\Userenv.log file. Note that this log does not contain date stamps, so you must:
    1. delete the existing userenv.log from the client
    2. enable verbose logging per KB 221833
    3. log off, log on, and save the userenv.log to a new location in order to limit data collection to the logon period.

    The userenv.log is excellent at following GPO and profile processing, and often you can clearly see where a logon delay occurs, indicated by a long interval between events.
  • Enable Net Logon logging. The Netlogon log is located in %systemroot%\debug and will be empty if logging is not enabled. This is an excellent source of information. For instance, it will show you clients in subnets that are not mapped to a site. This can cause a client to go to an out-of-site DC for authentication and result in a longer than expected logon time.
  • Run Process Monitor from Sysinternals. Look in the Help section for details on enabling boot logging. You can capture the process information during the slow boot to see which processes might be affecting performance.
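
To sketch the two logging steps above in commands (the registry value is from KB 221833 and the debug flag is the commonly documented Netlogon value; run these elevated on the machine in question):

```powershell
# Verbose Winlogon/Userenv logging (KB 221833).
# Output lands in %Systemroot%\Debug\UserMode\Userenv.log after the next logon.
reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x30002 /f

# Enable Net Logon logging, then restart the service so it takes effect.
# Output goes to %systemroot%\debug\netlogon.log
nltest /dbflag:0x2080ffff
Restart-Service Netlogon

# When finished, set the flag back to zero and restart the service again:
# nltest /dbflag:0x0
```
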

Other tips for troubleshooting slow client logons

There are a few more quick things you can do to see if your logon performance is caused by a known issue.

First, examine the GPResult.exe output and the LOGONSERVER environment variable on the client. While MPSreports and HPS Reports collect the GPResult for the logged-on user, they don’t collect the LOGONSERVER variable, which points to the authenticating DC. This is important because each time a user logs in, the GPOs are downloaded to the client. SYSVOL — which contains the GPOs — is a DFS root, however, and does not obey client site awareness. Instead, the client receives the list of DCs (hosting the SYSVOL DFS root) in a randomized order, and the GPOs are downloaded from the first DC in the list.

I have seen situations where clients in a main hub site would go across a slow WAN link to an out-of-site DC in order to get the GPOs, causing very slow logon times. Since this could change on each logon, the problem was intermittent.

Examine the GPResult for the DC that the GPOs were downloaded from and see if the GPOs are coming from an out-of-site DC. Also compare the LOGONSERVER variable to see if the client is being authenticated to an out-of-site DC. The logon delay could be explained through this “normal” behavior using known slow or busy links.
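
A quick way to make both checks from a PowerShell prompt on the client (the “applied from” string is how gpresult labels the GPO source DC in its verbose output; treat the exact wording as version-dependent):

```powershell
# Which DC authenticated this logon?
$env:LOGONSERVER

# Which DC did the GPOs actually come from?
gpresult /v | Select-String -Pattern "applied from"
```
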

Another good test is to boot to Safe Mode with Networking and see if the delay occurs. If not, then do a Net Start and list all the services started. Then boot in normal mode and run Net Start and list all the services again. The difference should point to services that may be suspect, and eliminating them one at a time should help you identify the problem. You can also try disabling applications that start on boot to see if an application is getting in the way.
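
One way to capture and compare the two service lists (file paths are illustrative):

```powershell
# While booted in Safe Mode with Networking:
net start | Out-File C:\temp\services-safe.txt

# After rebooting normally:
net start | Out-File C:\temp\services-normal.txt

# Services present only in the normal boot are the ones to suspect:
Compare-Object (Get-Content C:\temp\services-safe.txt) `
               (Get-Content C:\temp\services-normal.txt) |
    Where-Object { $_.SideIndicator -eq '=>' }
```

Disable the suspects one at a time and retest the logon after each change.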

One final technique is to take a network trace using Netmon, Wireshark or another network capture utility. Since you are trying to capture the logon process, one good way to do this is to connect a dumb hub to the network cable going to the switch, then connect one cable from the hub to the problem PC and another to a laptop that has Netmon or Wireshark installed. Run the capture tool in promiscuous mode and reproduce the logon. This setup ensures that the capture collects traffic in and out of the client and eliminates other network noise.
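
If you can’t insert a hub, Windows 7 and Server 2008 R2 can capture a boot/logon trace on the client itself with netsh (no capture tool needs to be installed on the problem PC; open the resulting .etl file in Network Monitor afterwards):

```powershell
# persistent=yes keeps the capture running across the reboot and logon
netsh trace start capture=yes persistent=yes tracefile=C:\temp\logon-trace.etl

# ...reboot, reproduce the slow logon, then:
netsh trace stop
```
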

These are the basics to get you started. Just remember that there are no magic solutions – it really just takes time and detective work to find the problem. In an upcoming article, I will describe the methods I used in some case studies that should help tie this all together.

Digging deeper

I will now dig a little deeper into how to develop an action plan to eliminate possible causes and, hopefully, find the problem.

Performance, of course, is always a challenge to write about because 1) everyone has a different view of acceptable performance and 2) there are many variables – hardware and software – that can affect performance. I do Active Directory-related troubleshooting for my day job, so that’s the context in which I’ve put this article. I have worked on a number of these issues and will rely on that experience to describe how to attack these problems.

The first thing you need to do is prepare a list of possible causes for slow client logon in general. This could probably be developed into a flow chart, but for now we’ll use a couple of lists and refer to them as we diagnose the problem.

Known causes of slow client logon performance

As I wrote in my previous article, here is a quick summary of what I’ve found can cause client logon delays in Windows. These are not listed in any particular order, and each could be at fault for any given situation:

  • Domain controller is unavailable or very busy
  • DC overwhelmed by LDAP traffic
  • DC also runs Exchange, SQL Server, file/print, etc.
  • Client is getting Group Policy from an out-of-site DC
  • Network traffic (startup/logon traffic is directly tied to the number of groups and GPOs that the computer and user are members of — very predictable)
  • Roaming profiles are slow to load
  • Inefficient logon scripts
  • Inefficient GPOs (filtering, restrictions)
  • Large number of GPOs and/or security group memberships
  • Viruses
  • Network components (drivers, switches, link speeds, dual-homed, network cables, etc.)
  • Applications and services starting on the client at boot
  • Antivirus updates, Windows Update downloads
  • Faulty images
There are probably more possibilities, but this is a good list to start with.

Now let’s examine some questions to ask in order to narrow the scope. This list is in the order that I would ask the questions. Each question is followed by a list of troubleshooting steps to resolve the issue. You will likely find more than one of these will apply, so organize the steps into a logical sequence for an action plan.

  1. When did this start?
     This is tough since you are relying on calls to the help desk, which may not be entirely accurate since some users just learn to live with these issues. Interview the user and pin down the start of the problem. Then look at what changed, such as software installations, network changes, GPO changes or perhaps another problem that was solved with a hotfix. The answer to this will affect the rest of the questions you ask. (For example, it might be time to move to 64-bit DCs!)
  2. Who is affected?
     This is difficult because once again you have to rely on help desk complaints.
     • One user – Investigate other users that are in the same location and security groups, using the same hardware, etc. to make sure the problem is affecting only one user. Focus on local settings, profiles, workstation configurations, groups, and so on.
     • Users in only one site – Look for problems at the domain controller or networking issues in the subnet(s). Examine domain controller performance to see if the DC is overwhelmed and can’t handle the load. The LOGONSERVER environment variable should be examined on each client to determine which DC is authenticating them — don’t assume they are authenticating to a DC in the site, as this can change. See if the “problem users” are all authenticating to one DC.
     • Users across sites – This could be the result of a network issue, a new patch installed, etc. Look for something in common among affected users, including when the problem was first seen.
     • New clients installed since a certain date – Perhaps these users have a new image or OS?
     • Terminal Services users – Look into local vs. roaming profile issues and terminal server load.
  3. Does this happen at the same time every day?
     Have the user log on and off at different times during the day, such as 10 a.m., 2 p.m., 7 p.m. or any other time when logon traffic is light. If the problem goes away, then you can focus on network traffic and DC performance during peak logon periods.
  4. Do you have sites across slow-linked networks?
     It is possible – and even common – for clients to authenticate to a local domain controller and get policy from another DC due to the way SYSVOL finds random DFS servers. It is also possible for a client to get policy from a DC in a poorly connected site, and because the selection changes, the problem could be intermittent. I don’t know of a fix for this but have heard that a possible workaround is to hard code the LOGONSERVER environment variable to a specific DC. If this works in a test, then implement it only on problem clients. I have not done this, but it is worth consideration. The DC used for GPO loading is found in the GPResult output. Run GPRESULT /v on the client.
  5. What did you change when this started?
     The most common response to this is “nothing”. After some digging, however, you’ll usually find something.
  6. Can the affected user reproduce the problem by logging on to another computer?
     In other words, does the problem follow the user? Or can another user who doesn’t have the problem log on to the affected computer and experience the same issue? If you can determine that the problem is tied to the computer itself, it will narrow your attack.
  7. Are you using roaming profiles (perhaps on some users and not others)?
     Check the network share and look for roaming profile issues. Also, follow the steps in part one of this article to enable verbose logging for Userenv and examine the log for more information.
  8. Is the user having long delays when logging off?
     This can also cause logon delays due to a bloated profile and registry. For Windows XP and earlier versions, consider implementing the Microsoft User Profile Hive Cleanup Service (UPHClean) to clean up local profiles and the registry. UPHClean’s functionality is built into Windows Vista.
  9. Are the affected users remote access clients?
     Perhaps the users only have a logon problem when using a remote access connection. Look at your remote connection software or VPN setup and try building a generic Windows connection rather than using your custom connection software. Your ISP and network performance could also be an issue here.

Additional tips

    Here are some additional tips for finding the cause of these delays:

    • Find a test client. Ideally, you should be able to get a workstation and reproduce the problem without bothering a user.
    • Download and run MPSreports from Microsoft on the client and DC. This collects data for all event logs, MSINFO32, NetDiag, drivers, hotfixes and more. Remember, the more data you have, the easier it will be to track down the problem.
    • Run PerfMon on the DC and client, and see if you can match the time of the client problem with some performance spike on the domain controller.
    • Run a network trace and try to determine what is happening during the logon process that causes the delay.
    • See if the problem happens at a specific time of day and if so, examine what is happening at that time. Suspects include AV and Windows updates, scheduled jobs, and client survey software.
    • Review GPO settings. Known performance hits can come from ACL and WMI filters, loopback processing, etc. Determine if any GPO settings were implemented at the time this problem started.
    • If the problem follows the user (see item 6 in the question list above), try copying the user account to create a new user. If that account has no problem, recreate the account. I have seen this work in some cases. This test also eliminates the profile. You should try deleting the user’s profile to see if that fixes the issue before recreating the account.
    • Review the logon scripts. They can grow little by little until they become unwieldy and ineffective.

    As I stated before, there are no easy solutions to this problem and it can take a lot of time to debug. The best attack is to review the possible causes, ask the right questions to narrow the scope, and use the tools noted here to gather and analyze data to locate the cause.


    How to use the Set-ExchangeServer cmdlet to manage workload on Active Directory domain controllers

    Posted by Alin D on July 21, 2011

    Active Directory requests from Exchange Server can often overload domain controllers. Using the Exchange Management Shell’s Set-ExchangeServer command can protect domain controllers from additional stress. This tip from Exchange Server expert Brien Posey explains how AD requests can overload domain controllers, how the Set-ExchangeServer command works, and the parameters that can be used with the command to control which domain controllers Exchange Server uses.

    Placing Exchange servers into a dedicated Active Directory site can be a viable option for organizations running Exchange Server 2003. However, since Exchange Server 2007 uses AD site topology for message routing, having a dedicated site can be counterproductive.

    Organizations create dedicated Active Directory sites for Exchange servers to limit the impact on certain domain controllers. Abandoning a site that was previously dedicated to Exchange Server can overwhelm domain controllers that were not previously receiving Exchange’s LDAP requests.

    Exchange Server 2007 generally distributes requests across multiple domain controllers instead of overwhelming a single domain controller; however, global catalog servers can be an exception to this. Although Exchange Server tries to evenly distribute AD requests, external factors can play into how well domain controllers handle the increased workload.

    Exchange Server probably isn’t the only AD-dependent application in your network. Other applications generate just as many AD requests. These applications may focus on specific domain controllers, rather than spreading Active Directory requests across multiple ones. This means that you may have some domain controllers that are already overworked — before they even begin servicing Exchange Server requests.

    Another factor to consider is that all your domain controllers may not have the same hardware capabilities. Some domain controllers may be able to service more requests than others because they run on more robust hardware.

    If external factors are a concern, you can mitigate this by instructing Exchange Server which domain controller to use. One way is to use Exchange Management Shell’s Set-ExchangeServer command to instruct Exchange servers on which domain controllers to use — or not to use.

    Note: If you use the Set-ExchangeServer command, you should use the –WhatIf parameter first. Appending the –WhatIf parameter to the Set-ExchangeServer command allows you to see what would’ve happened if the command had actually been executed. If you’re satisfied with the results, you can remove the –WhatIf parameter and execute the command.

    Table 1 lists the parameters that can be used with this command to control which domain controllers Exchange Server uses.

    Table 1: Set-ExchangeServer cmdlet parameters that determine which domain controllers Exchange Server will use.

    • Identity – Specifies the name, GUID or distinguished name of the server to which you want to apply the command.
    • DomainController – Does not place any restrictions on domain controller selection; it specifies which domain controller should be used when the Set-ExchangeServer command itself is processed.
    • StaticConfigDomainController – Tells Exchange Server which domain controller the server should use in conjunction with DSAccess.
    • StaticDomainControllers – Provides Exchange Server with a list of domain controllers that the specified server should use for directory service access (see note below).
    • StaticExcludedDomainControllers – Excludes one or more domain controllers from being used by a specified Exchange server.
    • StaticGlobalCatalogs – Provides the specified Exchange server with a list of the global catalog servers it should use.

    The Set-ExchangeServer command can force a specified server to use (or not use) specific domain controllers. Under normal circumstances, avoid assigning Exchange Server a static list of domain controllers.

    However, if you previously used a dedicated Active Directory site for Exchange, then using a static list of domain controllers is preferred, as long as the placement of the dedicated site does not negatively impact message routing.

    Note: If you must use the Set-ExchangeServer command to provide Exchange with a static list of domain controllers, I’d recommend informing Exchange Server about which domain controllers it should not use, rather than which domain controllers it should use. If you only authorize Exchange Server to use one or two domain controllers — and those domain controllers become inaccessible — then Exchange Server will fail even though there may be other domain controllers online.
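
A minimal Exchange Management Shell sketch following that advice (the server and DC names are placeholders for your own):

```powershell
# Preview the change first with -WhatIf
Set-ExchangeServer -Identity "EX01" -StaticExcludedDomainControllers "dc02.contoso.com" -WhatIf

# Apply it, telling EX01 which DC(s) NOT to use
Set-ExchangeServer -Identity "EX01" -StaticExcludedDomainControllers "dc02.contoso.com"

# Verify the static settings and what the server is actually using
Get-ExchangeServer -Identity "EX01" -Status | Format-List Name,Static*
```

Excluding a known-overworked DC this way leaves Exchange free to fail over among all the remaining domain controllers.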


    Stuxnet Worm

    Posted by Alin D on October 11, 2010

    Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

    As the story goes, the Stuxnet worm was designed and released by a government–the U.S. and Israel are the most common suspects–specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

    Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

    Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines–and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

    If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location–and that Stuxnet’s authors knew exactly what they were targeting.

    It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

    All the anti-virus programs detect and remove Stuxnet from Windows systems.

    Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. Drivers have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

    Over time the attackers swapped out modules that didn’t work and replaced them with new ones–perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

    Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.

    We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

    Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

    Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup–surely any organization that goes to all this trouble would test the thing before releasing it–and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

    None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates–India, Indonesia, and Pakistan–are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

    Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

    Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things–large and small–that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.

    Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

    Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.

    If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to only spread to three additional computers and to erase itself after 21 days–but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

    My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

    This essay originally appeared on Forbes.com.

    My alternate explanations for Stuxnet were cut from the essay. Here they are:

    • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
    • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
    • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
    • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

    Note that some of these alternate explanations overlap.

    Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

    Source: Here

    Posted in TUTORIALS | Leave a Comment »

    An alternative solution to Rename Domain with Exchange 2007/2010

    Posted by Alin D on September 24, 2010

    Recently my company registered a new domain name and asked me to investigate the best way to rename the domain internally, repoint the publicly accessible CNAME records of our websites (hosted on IIS) to the new domain name, and change the email addresses for the entire organization. Fun, huh? A Google search suggests that a domain rename is possible with Windows Server 2003 Active Directory and Exchange 2003 SP1. However, according to Microsoft TechNet, I cannot rename a Windows 2008 native domain with Exchange 2007 installed. So what happens to those who are in one of the following situations:

    • Rename Business registration
    • Merger and/or Acquisition between companies
    • Change of ownership

    If your management decides to have new user accounts @newdomain, email addresses @newdomain, and websites under the new domain name, you will have no choice but to find a solution, regardless of who says what. In this article (Plan A), I will investigate and share what happens if you rename the domain in a test environment similar to my organisation's, i.e. Microsoft Active Directory 2008 and Exchange 2007/2010. For those in my situation, I will then explain (Plan B) how the same objectives can be accomplished with an alternative deployment, that is, without touching the AD domain and Exchange 2007/2010. I know Plan A is going to fail, but it is worthwhile to document that for management and then go with Plan B so that the business keeps running smoothly. When the time is right and funds are available, you can rebuild the Microsoft messaging systems for the entire organization.

    Warning: Do NOT perform these steps in a production environment. Domain rename is NOT supported when Exchange 2007/2010 is installed on a member server.

    Rename Domain on a Testbed

    Objectives:

    • Rename Domain
    • Migrate IIS to new domain
    • Fix GPO and Exchange (only applicable for Exchange 2003)

    Assumptions:


    Steps involve:

    • Set up your control station for the domain rename operation.
    • Freeze the Forest Configuration
    • Back up all the domain controllers in your forest.
    • Generate the current forest description.
    • Specify the new forest description.
    • Generate domain rename instructions
    • Push domain rename instructions to all domain controllers, and verify DNS readiness.
    • Verify the readiness of the domain controllers.
    • Execute the domain rename instructions
    • Update the Exchange configuration, and restart the Exchange servers (Only applicable for Exchange 2003 SP1)
    • Unfreeze the forest configuration
    • Re-establish external trusts
    • Fix Group Policy objects (GPOs) and links.

    Precaution: Use the following link for Active Directory Backup and Restore in Windows Server 2008 or keep your resume handy

    To raise the forest functional level to Windows Server 2008

    1. Open Active Directory Domains and Trusts.
    2. In the scope pane, right-click Active Directory Domains and Trusts and then click Raise Forest Functional Level.
    3. In the Select an available forest functional level box, click Windows Server 2008, and then click Raise.
    4. Click OK to raise the forest functionality, and then click OK again.


    To analyze and prepare DNS zones for domain rename

    1. Compile a list of DNS zones that need to be created.
    2. Use the DNS MMC snap-in to create the required DNS zones compiled in step 1.
    3. Configure DNS zones according to “Add a forward lookup zone” in Windows Server 2008.
    4. Configure dynamic DNS update according to “Allow dynamic updates” in Windows Server 2008.

    To generate the current forest description file

    In Windows Server 2008, the rendom and gpfixup utilities are available in the %Windir%\System32 folder. If you change your directory to C:\Windows\System32 and run rendom /list, then domainlist.xml will be placed in the same directory.

    1. On the control station, open a command prompt and change to the X:\DomainRename directory.
    2. At the command prompt, type the following command and press ENTER: rendom /list
    3. Save a copy of the current forest description file (domainlist.xml) generated in step 2 as domainlist-save.xml for future reference by using the following copy command: copy domainlist.xml domainlist-save.xml


    To edit the domainlist.xml file

    1. Using a simple text editor such as Notepad.exe, open the current forest description file domainlist.xml generated in “STEP 3: Generate the Current Forest Description” earlier in this document.
    2. Edit the forest description file, replacing the current DNS and/or NetBIOS names of the domains and application directory partitions to be renamed with the planned new DNS and/or NetBIOS names.
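    The same edit can also be scripted. Below is a minimal Python sketch against a simplified stand-in for domainlist.xml; the real file generated by rendom /list carries additional fields (such as domain GUIDs), so treat this as an illustration of the edit, not the actual schema:

    ```python
    import xml.etree.ElementTree as ET

    # Simplified stand-in for the file rendom /list produces -- the real
    # domainlist.xml also contains GUIDs and other fields (illustrative only).
    SAMPLE = """<Forest>
      <Domain>
        <DNSname>wolverine.com.au</DNSname>
        <NetBiosName>WOLVERINE</NetBiosName>
      </Domain>
    </Forest>"""

    def rename_domain(xml_text, old_dns, new_dns, old_nb, new_nb):
        """Swap the DNS and NetBIOS names of a domain in the forest description."""
        root = ET.fromstring(xml_text)
        for dom in root.iter("Domain"):
            dns, nb = dom.find("DNSname"), dom.find("NetBiosName")
            if dns is not None and dns.text == old_dns:
                dns.text = new_dns
            if nb is not None and nb.text == old_nb:
                nb.text = new_nb
        return ET.tostring(root, encoding="unicode")

    updated = rename_domain(SAMPLE, "wolverine.com.au", "microsoftguru.com.au",
                            "WOLVERINE", "MICROSOFTGURU")
    ```

    Whether you edit by hand or by script, always diff the result against the saved domainlist-save.xml before uploading.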



    To review the new forest description in domainlist.xml

    At the command prompt, type the following and then press ENTER: rendom /showforest

    To generate the domain rename instructions and upload them to the domain naming master

    1. On the control station, open a command prompt.
    2. From within the X:\DomainRename directory, execute the following command: rendom /upload
    3. Verify that the domain rename tool created the state file dclist.xml in the X:\DomainRename directory and that the state file contains an entry for every domain controller in your forest.


    To discover the DNS host name of the domain naming master

    1. On the control station, open a command prompt.
    2. At the command prompt, type the following and then press ENTER: dsquery server -hasfsmo name

    To force synchronization of changes made to the domain naming master

    The following procedure forces the Active Directory changes initiated at the Domain Naming master DC in STEP 4 to replicate to all DCs in the forest.

    1. On the control station, open a command prompt.
    2. At the command prompt, type the following and then press ENTER: repadmin /syncall /d /e /P /q DomainNamingMaster

    where DomainNamingMaster is the DNS host name of the domain controller that is the current domain naming master for the forest.

    To verify the readiness of domain controllers in the forest

    1. On the control station, open a command prompt and change to the X:\DomainRename directory.

    2. At the command prompt, type the following command and then press ENTER: rendom /prepare

    3. Once the command has finished execution, examine the state file dclist.xml to determine whether all domain controllers have reached the Prepared state.

    To execute the domain rename instructions on all domain controllers

    1. On the control station, open a command prompt.
    2. At the command prompt, type the following and then press ENTER: rendom /execute
    3. When the command has finished execution, examine the state file dclist.xml to determine whether all domain controllers have reached either the Done state or the Error state.
    4. If the dclist.xml file shows any DCs as remaining in the Prepared state, repeat step 2 in this procedure as many times as needed until the stopping criterion is met.
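    Checking the per-DC states can be scripted too. Here is a Python sketch against a simplified stand-in for the state file; the element names are assumptions for illustration, so check them against the real file before relying on this shape:

    ```python
    import xml.etree.ElementTree as ET
    from collections import Counter

    # Illustrative stand-in for the rendom state file (simplified schema).
    SAMPLE = """<DcList>
      <DC><Name>dc1.wolverine.com.au</Name><State>Done</State></DC>
      <DC><Name>dc2.wolverine.com.au</Name><State>Error</State></DC>
      <DC><Name>dc3.wolverine.com.au</Name><State>Prepared</State></DC>
    </DcList>"""

    def summarize_states(xml_text):
        """Tally per-DC states and flag whether rendom /execute must be re-run."""
        root = ET.fromstring(xml_text)
        states = Counter(dc.findtext("State") for dc in root.iter("DC"))
        rerun = states.get("Prepared", 0) > 0 or states.get("Error", 0) > 0
        return states, rerun

    states, rerun = summarize_states(SAMPLE)
    ```

    In this sample one DC is still in Error and one in Prepared, so the summary flags that another rendom /execute pass is needed.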


    To force Rendom /execute to re-issue the RPC to a DC in the Error state

    1. In the domainlist.xml file, locate the <Retry></Retry> field in the domain controller entry for the DC that you believe should be retried.
    2. Edit the domainlist.xml file such that the field reads <Retry>yes</Retry> for that entry.
    3. The next execution of the rendom /execute command will re-issue the execute-specific RPC to that DC.

    To fix up DFS topology in every renamed domain

    On the control station, open a command prompt. For each Dfs root, if any of the topology components as described above needs to be fixed, type the following command (the entire command must be typed on a single line, although it is shown on multiple lines for clarity) and press ENTER:

    dfsutil /RenameFtRoot /Root:DfsRootPath /OldDomain:OldName /NewDomain:NewName /Verbose

    -Where-

    DfsRootPath is the DFS root to operate on, e.g., \\microsoftguru.com.au\public.

    OldName is the exact old name to be replaced in the topology for the Dfs root.

    NewName is the exact new name to replace the old name in the topology.

    To fix up Group Policy in every renamed domain

    1. On the control station, open a command prompt and change to the X:\DomainRename directory.
    2. At the command prompt, type the following command (the entire command must be typed on a single line, although it is shown on multiple lines for clarity) and press ENTER:

    gpfixup /olddns:OldDomainDnsName /newdns:NewDomainDNSName /oldnb:OldDomainNetBIOSName

    /newnb:NewDomainNetBIOSName /dc:DcDnsName 2>&1 >gpfixup.log

    -Where-

    OldDomainDnsName is the old DNS name of the renamed domain.

    NewDomainDnsName is the new DNS name of the renamed domain.

    OldDomainNetBIOSName is the old NetBIOS name of the renamed domain.

    NewDomainNetBIOSName is the new NetBIOS name of the renamed domain.

    DcDnsName is the DNS host name of a domain controller in the renamed domain, preferably the PDC emulator, that successfully completed the rename operation with a final Done state in the dclist.xml state file in “STEP 8: Execute Domain Rename Instructions” earlier in this document.

    For example,

    gpfixup /olddns:wolverine.com.au /newdns:microsoftguru.com.au /oldnb:wolverine /newnb:microsoftguru /dc:dc.wolverine.com.au 2>&1 >gpfixup1.log
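    When several domains are renamed, the gpfixup command line has to be rebuilt for each one. A hypothetical Python helper that assembles it (the helper and its log-file default are our own; only the gpfixup syntax comes from the procedure above):

    ```python
    def build_gpfixup(old_dns, new_dns, old_nb, new_nb, dc, log="gpfixup.log"):
        """Assemble the gpfixup command line for one renamed domain.
        (Hypothetical helper -- only the gpfixup switch syntax is real.)"""
        return ("gpfixup /olddns:{0} /newdns:{1} /oldnb:{2} /newnb:{3} "
                "/dc:{4} 2>&1 >{5}").format(old_dns, new_dns, old_nb, new_nb,
                                            dc, log)

    cmd = build_gpfixup("wolverine.com.au", "microsoftguru.com.au",
                        "wolverine", "microsoftguru", "dc.wolverine.com.au")
    ```

    Generating the command per domain avoids the copy-and-paste typos that are easy to make when fixing up many renamed domains by hand.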


    To force replication of the Group Policy fix-up changes made at the DC named in DcDNSName in above step of this procedure to the rest of the DCs in the renamed domain, type the following and then press ENTER: repadmin /syncall /d /e /P /q DcDnsName NewDomainDN

    -Where-

    DcDnsName is the DNS host name of the DC that was targeted by the gpfixup command.

    NewDomainDN is the distinguished name (DN) corresponding to the new DNS name of the renamed domain.

    Repeat the steps in this procedure for every renamed domain. You can enter the commands in sequence for each renamed domain.

    For example: repadmin /syncall /d /e /P /q dc.microsoftguru.com.au dc=microsoftguru,dc=com,dc=au
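    The distinguished-name argument is derived mechanically from the DNS name: each dot-separated label becomes one dc= component. A small Python sketch of the conversion, for illustration:

    ```python
    def dns_to_dn(dns_name):
        """Derive an AD distinguished name from a DNS domain name:
        each dot-separated label becomes one dc= component."""
        return ",".join("dc=" + label for label in dns_name.split("."))

    dn = dns_to_dn("microsoftguru.com.au")
    ```

    This is handy when scripting the repadmin step for several renamed domains in sequence.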

    To update the DNS name of the CA machine

    1. On the CA machine, open the registry editor and locate the entry CAServerName under HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\YourCAName.
    2. Change the value of CAServerName to correspond to the new DNS host name.

    To update the Web enrollment file

    To enable proper Web enrollment for the user, you must also update the file that is used by the ASP pages used for Web enrollment. The following change must be made on all CA machines in your domain.

    1. On the CA machine, search for the certdat.inc file (if you have used default installation settings, it should be located in the %windir%\System32\CertSrv directory).


    2. Open the file, which appears as follows:

    <%' CODEPAGE=65001 'UTF-8%>
    <%' certdat.inc - (CERT)srv web - global (DAT)a
    ' Copyright (C) Microsoft Corporation, 1998 - 1999 %>
    <% ' default values for the certificate request
    sDefaultCompany=""
    sDefaultOrgUnit=""
    sDefaultLocality=""
    sDefaultState=""
    sDefaultCountry=""
    ' global state
    sServerType="Enterprise" 'vs StandAlone
    sServerConfig="OLDDNSNAME\YourCAName"
    sServerDisplayName="YourCAName"
    nPendingTimeoutDays=10
    ' control versions
    sXEnrollVersion="5,131,2510,0"
    sScrdEnrlVersion="5,131,2474,0"
    %>

    3. Change the sServerConfig entry so that it contains the new DNS name of the CA machine.
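    Editing certdat.inc by hand works, but on multiple CA machines the change can also be scripted. A minimal Python sketch; the sample text and helper are illustrative, not part of the CA tooling:

    ```python
    # Illustrative fragment of certdat.inc (abridged).
    SAMPLE = (
        'sServerType="Enterprise"\n'
        'sServerConfig="olddc.wolverine.com.au\\MyCA"\n'
        'sServerDisplayName="MyCA"'
    )

    def update_server_config(certdat_text, new_dns, ca_name):
        """Rewrite the sServerConfig line to point at the new DNS host name."""
        out = []
        for line in certdat_text.splitlines():
            if line.startswith("sServerConfig="):
                line = 'sServerConfig="{0}\\{1}"'.format(new_dns, ca_name)
            out.append(line)
        return "\n".join(out)

    updated = update_server_config(SAMPLE, "dc.microsoftguru.com.au", "MyCA")
    ```

    Keep a backup copy of the original certdat.inc before overwriting it, just as with domainlist.xml earlier.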

    To perform attribute clean up after domain rename

    1. On the control station, open a command prompt.
    2. At the command prompt, from within the X:\DomainRename directory, execute the following command: rendom /clean

    Command-line usage to run XDR-fixup.exe

    XDR-fixup.exe /s:start_domainlist.xml /e:end_domainlist.xml [/user:username /pwd:password | *] [/trace:tracefile] /changes:changescript.ldf /restore:restorescript.ldf [/?]

    Note This command is one line. It has been wrapped for readability.

    Command-line usage to verify XDR-fixup.exe

    Use the following command line to verify the changes that are made by XDR-fixup.exe:

    XDR-fixup /verify:restorescript.ldf /changes:verifycorrections.ldf

    To unfreeze the forest configuration

    From within the X:\DomainRename directory, execute the following command: rendom /end

    To force-remove a domain member that fails to join the new domain, use the following command, then re-join the domain manually:

    netdom remove <machine-name> /Domain:<old-domain> /Force

    To use Control Panel to check for primary DNS suffix update configuration for a computer

    The following procedures explain two ways to view the setting for a member computer that determines whether the primary DNS suffix changes when the name of the membership domain changes.

    1. On a member computer, in Control Panel, double-click System.

    2. Click the Computer Name tab and then click Change.

    3. Click More and then verify whether Change primary domain suffix when domain membership changes is selected.

    4. Click OK until all dialog boxes are closed.

    To use the registry to check for primary DNS suffix update configuration for a computer

    1. On the Start menu, click Run.

    2. In the Open box, type regedit and then click OK.

    3. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

    4. Verify whether the value of the REG_DWORD SyncDomainWithMembership is 0x1. This value indicates that the primary DNS suffix changes when the domain membership changes.

    To determine whether Group Policy specifies the primary DNS suffix for a computer

    On a member computer, perform one of the following steps:

    • At a command prompt, type gpresult. In the output, under Applied Group Policy objects, check to see whether Primary DNS Suffix is listed.

    • Open the Resultant Set of Policy Wizard: in Active Directory Users and Computers, right-click the computer object, click All Tasks, and then click Resultant Set of Policy (Logging).

    • Open a command prompt and type ipconfig /all. Check the Primary DNS Suffix in the output. If it does not match the primary DNS suffix that is specified in the System Control Panel for the computer (see “To use Control Panel to check for primary DNS suffix update configuration for a computer” earlier in this document), then the Primary DNS Suffix Group Policy is applied.

    • In the registry, check for the presence of the entry Primary DNS Suffix under HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\System\DNSClient. If a value is present, then the Primary DNS Suffix Group Policy is applied to the computer.

    To install Support Tools

    1. On the Windows Server 2003 Standard Edition, Windows Server 2003 Enterprise Edition, or Windows Server 2003 Datacenter Edition operating system CD, double-click the Support folder.

    2. In the Support folder, double-click the Tools folder and then run suptools.msi.

    To use ADSI Edit to add DNS suffixes to msDS‑AllowedDNSSuffixes

    The attribute msDS‑AllowedDNSSuffixes is an attribute of the domain object. Therefore, you must set DNS suffixes for each domain whose name is going to change.

    1. On the Start menu, point to Programs, Windows Server 2003 Support Tools, Tools, and then click ADSI Edit.

    2. Double-click the domain directory partition for the domain you want to modify.

    3. Right-click the domain container object, and then click Properties.

    4. On the Attribute Editor tab, in the Attributes box, double-click the attribute msDS‑AllowedDNSSuffixes.

    5. In the Multi-valued String Editor dialog box, in the Value to add box, type a DNS suffix and then click Add.

    6. When you have added all the DNS suffixes for the domain, click OK.

    7. Click OK to close the Properties dialog box for that domain.

    8. In the scope pane, right-click ADSI Edit and click Connect to.

    9. Under Computer, click Select or type a domain or server.

    10. Type the name of the next domain for which you want to set the primary DNS suffix, and then click OK.

    11. Repeat steps 2 through 7 for that domain.

    12. Repeat steps 8 through 10 to select each subsequent domain and repeat steps 2 through 7 to set the primary DNS suffix for each subsequent domain that is being renamed.
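    During the transition, both the old and the new DNS suffix must be present in msDS‑AllowedDNSSuffixes. The merge logic is simple enough to sketch in Python (illustrative only; the actual attribute edit is done in ADSI Edit as described above):

    ```python
    def merge_suffixes(existing, additions):
        """Merge DNS suffixes into a single list, preserving order and
        skipping duplicates case-insensitively (DNS names compare that way)."""
        seen = {s.lower() for s in existing}
        merged = list(existing)
        for s in additions:
            if s.lower() not in seen:
                merged.append(s)
                seen.add(s.lower())
        return merged

    suffixes = merge_suffixes(["wolverine.com.au"],
                              ["microsoftguru.com.au", "WOLVERINE.COM.AU"])
    ```

    The point of the case-insensitive check is to avoid adding the same suffix twice in different capitalizations, which only clutters the multi-valued attribute.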


    To apply the Group Policy setting Primary DNS Suffix to groups of member computers

    1. In Active Directory Users and Computers, right-click the domain or organizational unit that contains the group of computers to which you are applying Group Policy.

    -Or-

    In Active Directory Sites and Services, right-click the site object that contains the computers to which you are applying Group Policy.

    2. Click the Group Policy tab.

    3. In the Group Policy object Links box, click the Group Policy object that you want to contain the Primary DNS Suffix setting.

    -Or-

    To create a new Group Policy object, click New and then type a name for the object.

    4. With the Group Policy object selected, click Edit.

    5. Under Computer Configuration, click to expand Administrative Templates, Network, and then click DNS Client.

    6. In the results pane, double-click Primary DNS Suffix.

    7. Click Enabled, and then in the Enter a primary DNS suffix box, type the DNS suffix for the domain whose member computers are in the group you selected in Step 1.

    8. Click OK.

    9. Close the Group Policy dialog box, and then close the properties page for the selected object.

    To configure the redirecting alias DNS entry

    1. In the DNS MMC snap-in, expand the DNS server node to expose the old DNS zone.

    2. Right-click the old DNS zone.

    3. Click New Alias (CNAME ).

    4. In the Alias name box, type the original fully qualified domain name (FQDN) of the HTTP server.

    5. In the Fully qualified domain name for target host box, type the new FQDN of the HTTP Server, and then click OK.

    At this point you can test the redirection by pinging the FQDN of the old HTTP server. The ping should be remapped to the new FQDN of the HTTP server.

    Issues involving domain rename:

    • XDR-Fixup tool does not work on Exchange 2010
    • Exchange SMTP stops functioning
    • Exchange organization initialization fails


    Simple alternative solutions without renaming domain

    Microsoft does not support domain rename if Exchange 2007 is installed on a member server. So what is the workaround if you must have new user accounts, corresponding email accounts, and websites with the new domain name, without renaming the domain?

    • Prepare a control workstation and log on as a domain admin, schema admin and enterprise admin
    • Create a new IP range in your infrastructure
    • Prepare a Windows Server 2008 machine and promote it to a domain controller for your new domain, with the new domain name
    • Create an external trust between the two domains
    • Ask your ISP to add a new Host (A) record and MX record for the new domain


    • Point this new MX record to the existing SMTP server
    • Add the new domain to the trusted domains list


    • Add a new email address policy for the new domain





    • Change the default email address to the new email address through the E-mail Addresses property of each mailbox, using the Exchange Management Console


    • Migrate the IIS websites to the new web server
    • Redirect the CNAME record to the new websites for customers and stakeholders
    • Add a 301 redirect using Google Webmaster Tools if necessary
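    A 301 redirect simply answers a request for the old host with a Location header pointing at the new one. A Python sketch of the mapping, using the example domain names from this article (a real deployment would configure this in IIS rather than in code):

    ```python
    def redirect_301(host, path,
                     old_domain="wolverine.com.au",
                     new_domain="microsoftguru.com.au"):
        """Return (status, Location header) for a request that hit the old
        domain; anything else passes through untouched."""
        if host.lower().endswith(old_domain):
            new_host = host.lower().replace(old_domain, new_domain)
            return 301, "http://{0}{1}".format(new_host, path)
        return 200, None

    status, location = redirect_301("www.wolverine.com.au", "/about")
    ```

    The permanent (301) status, as opposed to a temporary (302) one, tells search engines to transfer the old domain's ranking to the new URLs.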

    Posted in Windows 2008 | Leave a Comment »

    Easy 10 tips for effective Active Directory design

    Posted by Alin D on September 23, 2010

    Active Directory design is a science, and it’s far too complex to cover all the nuances within the confines of one article. But I wanted to share with you 10 quick tips that will help make your AD design more efficient and easier to troubleshoot and manage.

    1: Keep it simple

    The first bit of advice is to keep things as simple as you can. Active Directory is designed to be flexible, and it offers numerous types of objects and components. But just because you can use something doesn’t mean you should. Keeping your Active Directory as simple as possible will help improve overall efficiency, and it will make the troubleshooting process easier whenever problems arise.

    2: Use the appropriate site topology

    Although there is definitely something to be said for simplicity, you shouldn’t shy away from creating more complex structures when it is appropriate. Larger networks will almost always require multiple Active Directory sites. The site topology should mirror your network topology. Portions of the network that are highly connected should fall within a single site. Site links should mirror WAN connections, with each physical facility that is separated by a WAN link encompassing a separate Active Directory site.

    3: Use dedicated domain controllers

    I have seen a lot of smaller organizations try to save a few bucks by configuring their domain controllers to pull double duty. For example, an organization might have a domain controller that also acts as a file server or as a mail server. Whenever possible, your domain controllers should run on dedicated servers (physical or virtual). Adding additional roles to a domain controller can affect the server’s performance, reduce security, and complicate the process of backing up or restoring the server.

    4: Have at least two DNS servers

    Another way that smaller organizations sometimes try to economize is by having only a single DNS server. The problem with this is that Active Directory is totally dependent upon the DNS services. If you have a single DNS server, and that DNS server fails, Active Directory will cease to function.
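    Two DNS servers matter because clients fail over between them: the resolver tries the next server only when the current one does not answer. The logic, sketched in Python with stub resolvers (purely illustrative, not how Windows implements it):

    ```python
    def resolve_with_failover(name, resolvers):
        """Try each DNS resolver callable in order; return the first answer.
        Raise only if every server fails -- the situation a second DNS
        server exists to prevent."""
        last_error = None
        for resolver in resolvers:
            try:
                return resolver(name)
            except Exception as exc:
                last_error = exc
        raise RuntimeError("all DNS servers failed") from last_error

    def dead_server(name):          # simulates a DNS server that is down
        raise TimeoutError("no response")

    def live_server(name):          # simulates the secondary answering
        return {"dc1.example.local": "10.0.0.10"}[name]

    ip = resolve_with_failover("dc1.example.local", [dead_server, live_server])
    ```

    With a single DNS server there is no second entry in that list, so one failure takes Active Directory name resolution down with it.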

    5: Avoid putting all your eggs in one basket (virtualization)

    One of the main reasons organizations use multiple domain controllers is to provide a degree of fault tolerance in case one of the domain controllers fails. However, this redundancy is often circumvented by server virtualization. I often see organizations place all their virtualized domain controllers onto a single virtualization host server. So if that host server fails, all the domain controllers will go down with it. There is nothing wrong with virtualizing your domain controllers, but you should scatter the domain controllers across multiple host servers.

    6: Don’t neglect the FSMO roles (backups)

    Although Windows 2000 and every subsequent version of Windows Server have supported the multimaster domain controller model, some domain controllers are more important than others. Domain controllers that are hosting Flexible Single Master Operations (FSMO) roles are critical to Active Directory health. Active Directory is designed so that if a domain controller that is hosting FSMO roles fails, AD can continue to function — for a while. Eventually though, a FSMO domain controller failure can be very disruptive.
    I have heard some IT pros say that you don’t have to back up every domain controller on the network because of the way Active Directory information is replicated between domain controllers. While there is some degree of truth in that statement, backing up FSMO role holders is critical.
    I once had to assist with the recovery effort for an organization in which a domain controller had failed. Unfortunately, this domain controller held all of the FSMO roles and acted as the organization’s only global catalog server and as the only DNS server. To make matters worse, there was no backup of the domain controller. We ended up having to rebuild Active Directory from scratch. This is an extreme example, but it shows how important domain controller backups can be.

    7: Plan your domain structure and stick to it

    Most organizations start out with a carefully orchestrated Active Directory architecture. As time goes on, however, Active Directory can evolve in a rather haphazard manner. To avoid this, I recommend planning in advance for eventual Active Directory growth. You may not be able to predict exactly how Active Directory will grow, but you can at least put some governance in place to dictate the structure that will be used when it does.

    8: Have a management plan in place before you start setting up servers

    Just as you need to plan your Active Directory structure up front, you also need to have a good management plan in place. Who will administer Active Directory? Will one person or team take care of the entire thing, or will management responsibilities be divided according to domain or organizational unit? These types of management decisions must be made before you actually begin setting up domain controllers.

    9: Try to avoid making major logistical changes

    Active Directory is designed to be extremely flexible, and it is possible to perform a major restructuring of it without downtime or data loss. Even so, I would recommend that you avoid restructuring your Active Directory if possible. I have seen more than one situation in which the restructuring process resulted in some Active Directory objects being corrupted, especially when moving objects between domain controllers running differing versions of Windows Server.

    10: Place at least one global catalog server in each site

    Finally, if you are operating an Active Directory consisting of multiple sites, make sure that each one has its own global catalog server. Otherwise, Active Directory clients will have to traverse WAN links to look up information from a global catalog.

    Posted in Windows 2003, Windows 2008 | Leave a Comment »

    Kerberos Authentication template – Domain Controller Certificates

    Posted by Alin D on September 16, 2010

    When you install Windows 2008 Certification Authority a new domain controller certificate template named Kerberos Authentication is available. It replaces the Domain Controller Authentication template. If you need more information about the new certificate templates shipped with a Windows 2008 CA you can read this article.

    Here is a table that outlines the specific attributes of the Domain Controller Authentication and Kerberos Authentication templates:

    Domain Controller Authentication
    • Key Usage: Client Authentication, Server Authentication, Smart Card Logon
    • Subject Alternate Name: DNS Name: Domain Controller FQDN

    Kerberos Authentication
    • Key Usage: Client Authentication, Server Authentication, Smart Card Logon, KDC Authentication
    • Subject Alternate Name: DNS Name: Domain FQDN; DNS Name: Domain NetBIOS name

    For more information about the KDC Authentication key usage that help assure that smart card users are authenticating against a valid Kerberos domain controller you can read this document: Enabling Strict KDC Validation in Windows Kerberos.

    Having the domain name rather than the domain controller name in the Subject Alternate Name of the certificate proves that the computer presenting the certificate is a domain controller for the domain contained in the Subject Alternate Name. Domain name should also be included in the certificate in order to enable Strict KDC Validation.

    We will describe how to deploy the Kerberos Authentication template certificates on your domain controllers and how to revoke the old certificates issued with the Domain Controller Authentication template once they are obsolete. We distribute certificates to domain controllers using autoenrollment; to achieve this you need to configure your template (permissions, settings…) and set up a GPO.

    If you want the new Kerberos Authentication template to replace the Domain Controller Authentication template, you need to configure it using certtmpl.msc by setting up the “Superseded Templates” tab. For more information you can have a look at the “Superseding Certificate Templates” chapter of this article.

    Once the template is well configured and ready for autoenrollment, the new certificates will be deployed automatically, you can run the certutil -pulse command on the domain controllers, in order to speed up the autoenrollment process.

    The new domain controller certificate replaces the old one in the local computer store, and events with source AutoEnrollment appear in the event log, telling us that the Kerberos Authentication certificate is installed.

    With Quest ActiveRoles Management Shell for Active Directory v1.4, you can manage certificates using PowerShell thanks to the Certificate and PKI management CmdLets. First we will check that the Kerberos Authentication certificates are installed on every Domain Controller:
    Get-QADComputer -computerRole 'DomainController' | Get-QADCertificate -Revoked:$false -template:'*kerberos authentication*' | Format-Table template,IssuedTo -AutoSize
    Once all your domain controllers have enrolled the new Kerberos Authentication certificates and you have checked that everything is running properly, you can disable the old Domain Controller Authentication template with certsrv.msc in order to avoid issuing this kind of certificate to a domain controller again.

    Then you can revoke the old Domain Controller Authentication certificates which were superseded by the Kerberos Authentication certificates. To achieve that, we will combine the Quest CmdLets and the certutil -revoke command. You just need to retrieve the serial numbers of the Domain Controller Authentication certificates and specify the reason code for the revocation of these certificates, in our case 4 for Superseded:

    Get-QADComputer -computerRole 'DomainController' | Get-QADCertificate -Revoked:$false -template:'*domain controller authentication*' | foreach {certutil -config "%SRV_CA_FQDN%\%CA_Common_Name%" -revoke $_.SerialNumber 4}
    You just need to adapt:

    • %SRV_CA_FQDN%: Issuing CA server FQDN.
    • %CA_Common_Name%: Certification Authority Common Name.

    By combining the Certutil command line tool and Quest AD CmdLets v1.4, you can make some of your PKI management tasks automatic.
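    The lookup-and-revoke pipeline above can also be wrapped in a small reusable function, so the CA configuration string and reason code become explicit parameters instead of values edited inline. A sketch only; the function name and defaults below are our own and not part of the Quest toolkit:

```
# Hypothetical helper around Get-QADCertificate + certutil -revoke.
# $CAConfig takes the usual "CAServerFQDN\CACommonName" form;
# reason code 4 means Superseded.
function Revoke-SupersededDcCertificates {
    param(
        [Parameter(Mandatory=$true)][string]$CAConfig,
        [string]$Template = '*domain controller authentication*',
        [int]$Reason = 4
    )
    Get-QADComputer -computerRole 'DomainController' |
        Get-QADCertificate -Revoked:$false -template:$Template |
        ForEach-Object { certutil -config $CAConfig -revoke $_.SerialNumber $Reason }
}
```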

    Posted in Windows 2008 | Tagged: , , , , , , , , , | Leave a Comment »

    How to Install Windows Server 2008 R2

    Posted by Alin D on August 22, 2010

    A thorough walkthrough of how to do a clean installation of Microsoft Windows Server 2008 R2. In this walkthrough, we use Windows Server 2008 R2 Standard Edition. This installation guide is part 1 of a larger feature currently in production, and will guide you completely through the initial installation of the Windows Server 2008 R2 operating system. The secret is out. In this video, I show you how to properly install the Windows Server 2008 R2 operating system. No magic or genius here, just following instructions and understanding how basic network architecture and server roles work. This guide won’t require a $10,000 USD server installation premium either! Here is the information you need to get started with Windows Server 2008 R2 Standard Edition as a Global Primary Domain Controller (PDC or GPDC). A Primary Domain Controller can host client systems in the Active Directory environment, which allows for a centralized database for file sharing, account access, and more using powerful Microsoft server technology.

    Hello, this is Mike from Windows 7 Forums. In this video, I am going to show you how to install Windows Server 2008 R2 Standard Edition.

    As you can see, we’re at the boot menu now. We select our CD/DVD-ROM to boot from and Windows is loading files. We begin the installation process by waiting for Windows to start booting. At this time, again, we’ll be installing Windows Server 2008 R2. I’ll go through some basic steps with you and we’ll accelerate the process for you so that it is easier for you to understand.

    As you can see, the installer is now loading. We choose our language, our time, and our keyboard, and we click on next. We click on install now. If we have pre-existing data on our hard drive here, what we do want to do is remove that data. Here we choose what edition of server we have. We’re choosing Windows Server 2008 R2 Standard Full Installation, not the Server Core Installation. We hit next, and accept the license terms. We’re not doing an upgrade.

    We already have data on this drive, and we are going to clear it out. We go to Drive Options and delete every single partition on the drive. Now we have a drive that is all unallocated free space, and we click Next.

    In a normal environment you would have a terabyte or more free; you certainly wouldn’t do a server install on only 100 gigabytes. It is possible, but you really wouldn’t do it. Windows is expanding files now, and we’re going to accelerate the process quite a bit using some video trickery. This is a very clean and very fast install: it takes about 15 to 20 minutes, and the expanding of the files takes the longest, so you will see the acceleration now as we move on. Normally, this step would take much longer.

    It will go through the Feature Installation, Update Installation, and Restarting. Now, we’re coming back and completing the installation. We’ll come back one more time while preparing the computer for first time use.

    Well everyone, we are back, and we’ve entered our initial password to configure Windows Server 2008 R2 Standard Edition. What we really want to do now is start setting up the server. This can be a complex task and can take quite a long time. It involves a great deal of knowledge about networking, especially if you want to turn this into a domain controller, which is where a video walkthrough like this one can help.

    First of all, you would normally set up activation, but we will not, because this install is for evaluation/educational purposes. Secondly, we set the time zone. We do see that we have a certain amount of freedom in what we want to do here, but we want to set the time zone first, so we’ll change the time zone over to [our local time zone].

    What we want to also do is configure networking, but before we even go there, we want to provide a computer name. If this is the only server that we have, we just want to change the name of the computer, instead of the random name that it is given. We change it over to SERVER. Obviously we have to immediately restart. What we want to immediately do is log back in, and here we are now. And now you see we have changed the computer name over to SERVER. What we want to do next, is configure networking.

    We want to make sure that the server itself is using an IP address that does not change (a static IP), because if the address were assigned by DHCP and changed later, that would become a major problem. We already have some information, and eventually we’ll be able to set our DNS server to the server itself. If we’re using this server for an office environment and we’re not hosting anything from it, we will usually have one network adapter; in other cases, we will have two. In this case we have one, and we’re using this server in a strictly office environment with no outbound hosting. We are not using this as a web server. So what we want to do is use the following IP address:

    192.168.1.15

    We’ll set the subnet mask automatically.

    We know that the default gateway in this instance is 192.168.1.1; this will vary from router to router. We set the DNS servers, in this instance, to 8.8.8.8 and 4.4.4.2, which are public DNS servers.

    (Eventually these will change once the server is properly set up as the primary domain controller: PDC)

    We confirm Internet access, and we have made this change now with a static IP address of 192.168.1.15.
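    The same static addressing can also be applied from an elevated command prompt instead of the GUI. A sketch using the values above; the connection name "Local Area Connection" is the 2008 R2 default and may differ on your server:

```
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.15 mask=255.255.255.0 gateway=192.168.1.1
netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=8.8.8.8 register=primary
netsh interface ipv4 add dnsservers name="Local Area Connection" address=4.4.4.2 index=2
```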

    Now, update this server. Enable Windows Automatic Updating and Feedback. This is something that we may want to change later if we don’t like it.

    For downloading and installing updates, when we look at the settings we usually see “Install updates automatically (recommended)”, “Install new updates every day at 3 AM”, and “Give me recommended updates the same way I receive important updates”. What we want in this instance is actually “Download updates, but let me choose when to install them”, because we don’t want the server restarting all the time. We also enable recommended updates, because we want those as well, and we allow all users on the server to install updates. In this case, we only have one user at this time: the Administrator.

    And we already have 45 important updates available, quite a few of them major. We want to install these as soon as possible, before adding any roles or features, configuring Remote Desktop, or even configuring the firewall. So what we want to do is go ahead and install those updates right now.

    Now that we have successfully installed Windows Updates on the server, you may be wondering what use the server is if no server roles or advanced features are installed. Installing all updates first is a necessity, because in order for us to connect any client computers to this server, we need to add the domain controller role to it. This is something we will demonstrate in the server role configuration stage, one of the final parts of this video, where we show you how to properly host other computers on the server.

    Once you join client computers to the server, whether Windows Vista, Windows 7, or Windows XP computers, you will be able not only to share files between all of these computers, but also to manage them through what is called Active Directory (AD) and Group Policy (GP and GPOs). That is done by joining these computers to the domain controller.

    In order to do that, we first have to create the domain controller. I will go through that process now, very quickly. Here we are again at our Initial Configuration Tasks menu. We click Add Roles, and we get a warning:

    • The administrator account has to have a strong password.
    • Network settings, such as a static IP address, must be configured.
    • The latest security updates from Windows Update must be installed.

    We’ve already met all of those requirements, so we hit next. Now we see a list of server roles that we can use on the server. We have:

    • Active Directory Certificate Services
    • Active Directory Domain Services
    • Active Directory Federation Services
    • Active Directory Lightweight Directory Services
    • Active Directory Rights Management Services
    • Application Server
    • DHCP Server
    • DNS Server
    • Fax Server
    • File Services
    • Hyper-V
    • Network Policy and Access Services
    • Print and Document Services (PDS)
    • Remote Desktop Services (RDS)
    • Web Server (IIS)
    • Windows Deployment Services (WDS)
    • Windows Server Update Services (WSUS)

    One of the most interesting ones is WSUS, where you can actually distribute Windows Updates across an entire network, simply by downloading them using one server. But here, we’re not concerned about that. What we want to do most of all: Active Directory Domain Services.

    We can worry about things like DNS later.

    Here, we see that .NET Framework 3.5.1 is required to install AD DS, so we add the required features by clicking “Add Required Features”, then Next.

    Here are some things to note: Microsoft suggests having at least two domain controllers per domain, so the domain can survive the outage of one. That is a suggestion, not a requirement.

    You will be prompted to install DNS to use Active Directory Domain Services.

    After you install the domain controller server role, use the Active Directory Domain Services Installation Wizard (dcpromo.exe) to make the server a fully functional domain controller. Installing AD DS will also install the DFS Namespaces, DFS Replication, and File Replication services, which are required by the directory service.
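    When the time comes to run dcpromo.exe, the wizard answers can also be supplied unattended. A sketch of an answer file for creating a brand-new forest; the domain name and password below are placeholders, and the full key list is documented in the Windows Server 2008 unattended installation reference:

```
; Usage: dcpromo /unattend:C:\newforest.txt
[DCINSTALL]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=corp.example.com
DomainNetBiosName=CORP
InstallDNS=Yes
SafeModeAdminPassword=P@ssw0rd123!
RebootOnCompletion=Yes
```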

    As you can see, .NET Framework is installing. And we have quite a few features to install, so we’ll be right back when the next prompt appears.

    Please see Part 2 (when available) for more information on Windows Server 2008 R2 configuration.

    • Setting up the Domain Controller
    • Configuring DNS
    • Joining Windows Clients to the Domain Controller

    Posted in Windows 2008 | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »

    Win Server 2008 Directory Services, SYSVOL DFS Replication

    Posted by Alin D on August 20, 2010

    The term Active Directory is most commonly equated with the NTDS.DIT database and its characteristics; however, its functionality is affected in a profound manner by the content of the SYSVOL folder, which resides by default directly under the Windows directory (although its placement is customizable) and provides the file system storage required to implement a wide range of Group Policies.

    Although both NTDS.DIT and SYSVOL get created as a direct result of domain controller promotion and their coherence is necessary to keep directory services fully operational, they are subject to different rules and processes. One of the more prominent examples of this dissonance is the use of two distinct replication engines to synchronize their respective contents across a distributed set of domain controllers. In particular, since the introduction of Active Directory with the release of the Windows 2000 Server product line, SYSVOL has relied on the File Replication Service (FRS) to accomplish this goal (physically separate and conceptually different from NTDS.DIT replication). Although the same technology remains available in a Windows Server 2008 environment, once you switch to the Windows Server 2008 domain functional level, you have the option to take advantage of a considerably more robust, efficient, and scalable mechanism based on Distributed File System Replication (DFS-R).

    The purpose of this article is to describe the advantages of DFS-R over FRS and to outline the migration path between them.

    In principle, both the File Replication Service and Distributed File System-based replication rely on NTFS constructs (such as the Update Sequence Number journal) and an internal Jet database to keep track of changes to the file system. The latter engine (introduced in Windows Server 2003 R2) offers a number of significant benefits over its predecessor. More specifically, it minimizes network usage by employing the Remote Differential Compression (RDC) algorithm, a form of block-level (rather than file-level) replication, which means that partial changes to large files do not trigger their full transfer; RDC can be adjusted to an arbitrary threshold or disabled altogether in environments with sufficient network bandwidth. It also has self-healing capabilities, handling journal wrap conditions and database corruption more gracefully. The efficiency and reliability of DFS-R have been further improved in Windows Server 2008, bringing such features as support for RPC asynchronous pipes (boosting the volume of replication requests that can be serviced simultaneously and mitigating the blocking behavior that might surface if one of the replication partners is slower or overloaded) and the ability to take advantage of unbuffered I/O, allowing for a higher number of concurrent downloads. In addition, the new version of DFS-R is RODC (Read Only Domain Controller) aware, automatically rolling back any changes applied to the local replica of SYSVOL (such functionality is missing from FRS-maintained volumes, which increases the chance of administrative error). Finally, for larger environments, it eliminates the recommended limit of 1,200 domain controllers per domain stipulated in the Windows Server 2003 Active Directory Branch Office Guide.

    Another significant factor to note when contemplating a DFS-R deployment concerns the method of transitioning from FRS. The process of migrating the SYSVOL replication mechanism to DFS-R has been designed in a manner that minimizes the impact on Active Directory availability and allows for a gradual, controlled, easy-to-track, and reversible (with the notable exception of the final stage) transition. From the administrative standpoint, the process is managed using a built-in DFS-R specific utility, DFSRMig.exe (residing in the %SystemRoot%\system32 folder), which triggers each individual migration step (by setting a global migration state, represented internally by a group of designated Active Directory objects and their attributes), automatically carried out across all domain controllers in the same domain. These steps are referred to (using DFS-R nomenclature) as transition states (a total of six), each starting and ending in a clearly defined set of conditions labeled as stable states (a total of four). Each state is associated with a unique integer value between 0 and 9, with the stable states occupying the lower part of this range.
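    Since every check during the migration comes back to these integer state values, a small helper that names them can make scripted monitoring more readable. A quick PowerShell sketch; the mapping simply restates the states described in this article:

```powershell
# Maps a dfsrmig global/local state number (0-9) to its DFS-R migration
# state name; stable states are 0-3, transitional states are 4-9.
function Get-DfsrMigrationStateName {
    param([ValidateRange(0,9)][int]$State)
    switch ($State) {
        0 { 'START (stable)' }
        1 { 'PREPARED (stable)' }
        2 { 'REDIRECTED (stable)' }
        3 { 'ELIMINATED (stable)' }
        4 { 'PREPARING (transitional)' }
        5 { 'WAITING FOR INITIAL SYNC (transitional)' }
        6 { 'REDIRECTING (transitional)' }
        7 { 'ELIMINATING (transitional)' }
        8 { 'UNDO REDIRECTING (transitional)' }
        9 { 'UNDO PREPARING (transitional)' }
    }
}
```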

    The 10 DFS-R States

    • START (stable state 0) designates the initial point of the migration. At this stage, it is critical to make sure that both Active Directory and FRS-based SYSVOL replication function properly. To test the former, use the RepAdmin command line utility (with the /showrepl /all or /replsum switches). To verify the status of the latter, take advantage of such utilities as FRSDiag, Sonar, or Ultrasound, available from the Microsoft Download Center. Make sure that the DFS Replication service is running and configured with Automatic startup on each domain controller. Confirm that the domain operates at the Windows Server 2008 functional level (which implies that all domain controllers are running Windows Server 2008). Verify that all domain controllers function properly and are accessible, paying particular attention to the PDC Emulator (as a matter of fact, you might want to consider running the migration directly from its console). Avoid adding new domain controllers or introducing changes to SYSVOL for the duration of the migration. If you decide to install a Read Only Domain Controller after the domain reaches the PREPARED state, you will need to manually create its DFS-R specific Active Directory settings by executing the DFSRMig /CreateGlobalObjects command.

    Finally, make sure that every volume containing the SYSVOL folder on each domain controller has a sufficient amount of free disk space (at a minimum, it should be capable of holding a second copy of SYSVOL). Once you have confirmed that all prerequisites are satisfied, enter the PREPARING transitional state by executing the DFSRMig /SetGlobalState 1 command while logged on with an account that is a member of the Domain Admins (or Enterprise Admins) group. Note that although it is possible to perform the migration by specifying the final value of 3, representing the ELIMINATED state, such an approach is not recommended since it does not provide rollback capabilities.

  • PREPARING (transitional state 4) starts with the creation of the DFS-R Global Settings object CN=DFSR-GlobalSettings (and its child objects) under the System container of the default naming context in Active Directory (the change takes place on the PDC Emulator and propagates afterwards via standard AD replication to other domain controllers). Its msDFSR-Flags attribute is used throughout the migration to serve as an indication of the current global status; its value is derived from the msDFSR-Flags attribute of the CN=DFSR-LocalSettings child object of each domain controller computer account, which also gets created when the PREPARING state starts and is updated throughout the migration to reflect the status of individual domain controllers. Other settings (under CN=DFSR-GlobalSettings) are used to designate the replication content and topology of SYSVOL_DFSR among all domain controllers. Note that the PDC Emulator is also responsible for creating all necessary objects specific to Read Only Domain Controllers residing in the same domain (since such changes cannot be applied directly to the Active Directory database hosted on each RODC). The DFS-R service also creates the SYSVOL_DFSR folder on the same volume as SYSVOL and duplicates the content (leveraging the robocopy utility) of its domain subfolder (including permissions and junction points). This is intended to minimize the time and bandwidth required to complete the initial DFS-R based replication with other domain controllers (which takes place in the REDIRECTING state). The current state of the migration gets registered in the Local State entry of REG_DWORD data type under the HKLM\System\CurrentControlSet\Services\DFSR\Parameters\SysVols\Migrating SysVols registry key.
  • WAITING FOR INITIAL SYNC (transitional state 5) automatically follows the PREPARING state. It is designed to complete the configuration of SYSVOL_DFSR, including its synchronization with another writable domain controller and the setup of the corresponding Jet database. Effectively, once this step successfully completes, there are two separate replication mechanisms, with FRS handling the original SYSVOL and DFS-R synchronizing its SYSVOL_DFSR-based duplicate. During its execution, the value of the Local State registry entry on each domain controller changes from 4 to 5.
  • PREPARED (stable state 1) is characterized by the existence of two independently replicated instances of SYSVOL, with FRS as the primary replication engine, handling the content available via the SYSVOL share, and DFS-R managing its non-shared duplicate residing in the SYSVOL_DFSR folder. In order to confirm whether this stage has been reached (which coincides with event ID 8014 registered in the local DFS Replication event log), examine the output of the DFSRMig /GetMigrationState command, which queries migration state information from all domain controllers and displays the outcome, identifying any that have not reached the migration state set on the PDC Emulator. Remember that such discrepancies should be remediated before you proceed further. Note also that it is possible to manually expedite the migration process. This can be done by forcing AD replication (to propagate changes to the global msDFSR-Flags attribute) with the repadmin utility (leveraging its /replicate or /syncall switches). It is also possible to force the DFS Replication service to discover the newly applied global migration settings by executing DFSRDiag PollAD with the /Member parameter pointing to the PDC Emulator. Once you confirm that the PREPARED state is consistent across the domain, you are ready to proceed to the next step by launching the DFSRMig /SetGlobalState 2 command.
  • REDIRECTING (transitional state 6) starts by synchronizing the content of SYSVOL and its DFS-R equivalent SYSVOL_DFSR on the PDC Emulator (which subsequently replicates to other domain controllers). This is done to account for any changes that might have taken place (typically introduced via Group Policy modifications) since the PREPARED state was reached. Next, the SysvolReady entry under the HKLM\System\CurrentControlSet\Services\Netlogon\Parameters registry key is set to 0 (translating into boolean FALSE), which effectively prevents SYSVOL from being shared. This action is followed by changing the value of the SYSVOL share Path parameter to SYSVOL_DFSR\sysvol. Finally, SysvolReady gets set back to 1 (corresponding to boolean TRUE), which reinstates the SYSVOL share (now associated with the new file system location). In addition, the Active Directory Domain Services service is added to the list of dependencies of the DFS Replication service (along with the File Replication service).
  • REDIRECTED (stable state 2) is somewhat similar to PREPARED, since both SYSVOL replication mechanisms are still active, with DFS-R handling replication of the SYSVOL_DFSR folder and FRS being responsible for SYSVOL. However, the SYSVOL share no longer points to the legacy location but instead provides access to the SYSVOL_DFSR\sysvol folder. As an implication of this arrangement, any direct changes to the original SYSVOL folder should be avoided, since they will be lost once you perform the remaining migration steps (note, however, that this concern does not apply to modifications applied via the Group Policy Management Console, which properly points to the new shared location). As before, you can confirm the status of the transition by reviewing the output generated by the DFSRMig /GetMigrationState command (a successful outcome is also reflected by event ID 8017 recorded in the DFS Replication event log on each of the domain controllers and by the value of the Local State registry entry referenced earlier). For more in-depth troubleshooting, use the DFSRMig_xxx.Log.gz files residing in the Debug subfolder under the Windows folder (where xxx is a sequentially assigned integer value). This verification is critical, since the next step is non-reversible (the only way to return your domain from the ELIMINATED to the START state is a full domain restore). Once you are ready, execute the DFSRMig /SetGlobalState 3 command.
  • ELIMINATING (transitional state 7) eliminates the dependency of the Active Directory Domain Services service on the File Replication Service, stops the latter temporarily, and removes all Active Directory-resident settings pertinent to its SYSVOL replication characteristics. These changes are relayed to other domain controllers via standard AD replication. It also deletes the content of the SYSVOL folder. Once these changes are completed, the FRS service is restarted again to accommodate scenarios where other content is replicated using this mechanism.
  • ELIMINATED (stable state 3) constitutes the final state of the migration. As before, its status can be verified by running the DFSRMig /GetMigrationState command or checking the value of the Local State registry entry on individual domain controllers (as well as the presence of event 8019 in the DFS Replication event log). In addition, the SysVol registry entry (under the HKLM\System\CurrentControlSet\Services\Netlogon\Parameters key) should point to the SYSVOL_DFSR folder (and the value of the SysvolReady entry in the same location should be set to 1).

  • UNDO REDIRECTING (transitional state 8) facilitates reverting from the REDIRECTED to the PREPARED state. To invoke it, execute DFSRMig /SetGlobalState 1. As part of the transition, the SYSVOL_DFSR folder is first synchronized with its SYSVOL counterpart (leveraging the robocopy utility) to account for any changes to its content that might have taken place while in the REDIRECTED state (typically introduced via Group Policy modifications). This synchronization takes place on the PDC Emulator and is subsequently replicated via FRS-driven replication.
  • UNDO PREPARING (transitional state 9) permits you to return to the START state from the PREPARED state (with the FRS mechanism handling SYSVOL replication and the SYSVOL_DFSR folder removed). To invoke it, use the DFSRMig /SetGlobalState 0 command. Note that, similarly to the PREPARING transitional state, the PDC Emulator will be responsible for deleting all DFS-R Active Directory objects specific to Read Only Domain Controllers.

  This concludes our overview of the characteristics of DFS-R SYSVOL replication available in Windows Server 2008 functional level domains and an outline of the steps involved in transitioning to it from the FRS mechanism employed in earlier implementations of Active Directory. Our next article will focus on new Group Policy features.
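    Putting the stable states together, the forward migration reduces to a short command sequence run from the PDC Emulator, with a verification pause between each step:

```
REM Wait after each step until /GetMigrationState reports that all
REM domain controllers have reached the requested state.
dfsrmig /SetGlobalState 1
dfsrmig /GetMigrationState

dfsrmig /SetGlobalState 2
dfsrmig /GetMigrationState

REM The next step is irreversible; back out with /SetGlobalState 1 or 0
REM before this point if needed.
dfsrmig /SetGlobalState 3
dfsrmig /GetMigrationState
```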
    Posted in Windows 2008 | Tagged: , , , , , , , , , , | Leave a Comment »