Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


How to move Public Folder data from Exchange Server to SharePoint manually

Posted by Alin D on July 21, 2011

Despite advising customers to migrate public folder data to SharePoint, Microsoft hasn’t supplied any tools to automate the process. And although there are a number of third-party utilities on the market, tight budgets can make acquiring them difficult. Fortunately, small- and medium-sized businesses can move public folder data to SharePoint manually, without third-party help.

Public folders contain different data types — messages, calendar items, contacts and tasks. How you migrate content depends on the type of data you’re moving. This tip gives the specific steps for migrating Exchange message data.

  • Step 1. Creating a .pst file for public folder content
    You need to start by moving all your public folder content to one or more .pst files. If you’re using Outlook 2010, be aware that the option to create .pst files is hidden; go to the ribbon’s Home tab and click New Items. Choose the More Items option, then select Outlook Data File.

  • Step 2. Moving public folder content to the .pst file
    After you’ve created the .pst file, click on Outlook’s folder icon to display all available folders. Next, copy your public folder data to the .pst file. At this point, you’re not actually moving the public folder data to the .pst file. Public folders should remain intact until the entire process is complete. That way, if something goes wrong during the migration process, you won’t lose important data.


    To copy public folder data to the .pst file, go to your public folder and select all of its contents. Next, right-click on the data and choose the Move -> Copy to Folder option. When prompted, choose the .pst file that you created. Outlook will copy all of the data from the public folder into the .pst file so that both contain identical copies of your data.

    • Although you can copy multiple public folders to a single .pst file, creating a separate .pst file for each folder keeps the data easier to organize. Once you’ve made a copy of all public folder data, take the public folder database offline to prevent users from adding content to the folders.
    • Step 3. Migrating public folder content from the .pst file to SharePoint
      After creating a copy of your public folder data, you can migrate it from the .pst file to SharePoint. Begin by opening a browser window and navigating to your SharePoint document library.

      SharePoint document libraries are viewed within a Web browser, which means that the library’s location is displayed within the browser’s address bar as a URL. You must convert this URL into universal naming convention (UNC) format. For example, the URL for the document library on my lab server is http://sharepoint.lab.com/Shared%20Documents/Forms/AllItems.aspx

      After converting it to UNC format, the URL becomes \\sharepoint.lab.com\Shared Documents. Then you can map a drive letter to it.

      • How to map drives to SharePoint
        If you have trouble mapping a drive to your SharePoint document library, there are a couple of things you can do:

        • Use your SharePoint server’s fully qualified domain name (FQDN) in the drive mapping. When I used the computer name (SharePoint instead of SharePoint.lab.com), the drive mapped, but it appeared empty.
        • I used Windows Vista for this process because I couldn’t get it to work with Windows 7, as shown in Figure 3. This is because a default SharePoint 2010 deployment isn’t running WebDAV.
      • Step 4. Completing the public folder migration
        The final step is to drag data from the .pst file into the mapped network drive. Don’t forget to take into account any applicable subfolders within the network drive. As you can see in Figure 4, the .pst file remains intact after the migration.

        This brings up an interesting point. Why did we even create a .pst file? Doing so gives your public folder data an extra level of protection. I also chose to use a .pst file because the migration to SharePoint can take some time. Creating a .pst file allows you to keep the public folder database offline during the SharePoint migration, which prevents users from modifying the data.


        The .pst data should appear within the SharePoint document library (Figure 5). You must refresh your Web browser to display items.
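The URL-to-UNC conversion described in Step 3 is mechanical enough to script. Here is a minimal Python sketch; the sample URL is the lab-server one from the article, and the assumption that the library root is everything before the /Forms/ view page is mine:

```python
from urllib.parse import unquote, urlparse

def url_to_unc(url):
    """Convert a SharePoint document-library URL to a UNC path.

    Drops the /Forms/AllItems.aspx view suffix (an assumption about the
    URL layout) and decodes percent-escapes such as %20.
    """
    parsed = urlparse(url)
    path = unquote(parsed.path)
    # The library root is everything before the /Forms/... view page.
    if "/Forms/" in path:
        path = path[:path.index("/Forms/")]
    return r"\\" + parsed.netloc + path.replace("/", "\\")

print(url_to_unc("http://sharepoint.lab.com/Shared%20Documents/Forms/AllItems.aspx"))
# → \\sharepoint.lab.com\Shared Documents
```

Once you have the UNC path, you can map a drive letter to it (for example with net use), as described above.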

Posted in TUTORIALS

New mobile functionality for Windows PowerShell explained

Posted by Alin D on May 31, 2011

Those familiar with Windows PowerShell might also recognize PowerGUI Pro from Quest Software, a graphical front-end for PowerShell that automates common tasks for the command-line system. What you might not know is that there is new functionality that expands on this concept: PowerGUI Pro – MobileShell.

MobileShell runs the PowerGUI Pro command engine on a remote server through a Web browser. Internet Explorer 8 and Mozilla Firefox are both supported out of the box, and the programmers are working on adding support for many other browsers, including Google Chrome and Opera.
MobileShell installs on a Windows server running Internet Information Services (IIS). By default it installs in a subdirectory named /MobileShell within the default website. All connections to MobileShell are SSL-encrypted by default, so snooping the traffic on the connection is no easier than it would be for any other SSL-protected transaction. You can run MobileShell without HTTPS, but it is not recommended since (among other things) you would have to pass credentials in plain sight. Also, if you are disconnected in the middle of a session by a browser crash or network disruption, you can reconnect to the previously spawned session in much the same manner as with a Remote Desktop session.

When you connect to MobileShell, you’ll see a three-pane display: an output window at the top, a command window at the bottom, and a pair of panels labeled Recent Commands and Favorites on the right. When you begin typing in a command in the bottom window, MobileShell will provide an auto-completion prompt for the command—a big timesaver since PowerShell commands can be a bit wordy.

The Recent Commands and Favorites panels are more or less what they sound like. The former maintains a history of the commands submitted through MobileShell. Click an item in the list and you can repopulate the command window with the same text. The Favorites panel is a list of commonly-used commands which you can customize by adjusting the settings. Among other things that can be controlled in the settings window is the output buffer size, which is set to 1,000 lines by default.

Finally, when using PowerGUI Pro – MobileShell it is important to avoid clicking the back button in your browser, as you risk closing the current session and losing your work; a minor tradeoff for another strong innovation.

Mobile Shell Pro

Posted in Powershell

Moving public folder calendars and tasks to SharePoint 2010

Posted by Alin D on February 7, 2011

Not all Exchange Server public folders contain messages; some folders also contain calendar items, contacts and tasks. Although you can move this type of data to SharePoint 2010 document libraries, the process to do so differs from the process for moving message data. This tip explains the steps involved in migrating public folder calendar, tasks and contacts to SharePoint 2010.

In order to demonstrate a public folder calendar migration, I created an Exchange Server 2010 public folder named Company Calendar and created a few items within it.

To move the entries to SharePoint, open a Web browser and navigate to your SharePoint site. The default front page contains a Calendar link. Click it to display the site’s calendar.

Click the Calendar option in the ribbon’s Calendar Tools section. You will then see a Connect to Outlook icon (Figure 1).

Figure 1. Click the Connect to Outlook icon to begin the migration process.

The SharePoint calendar will open in Outlook, which will only display your personal calendar and the SharePoint calendar (Figure 2). The public folder calendar is not displayed.

Figure 2. Outlook displays your personal calendar and the SharePoint calendar.

The next step is to move the events from the public folder calendar and place them into the SharePoint calendar. Expand the list of calendars and choose the check boxes that correspond with the SharePoint calendar and the public folder calendar, as shown in Figure 3.

Figure 3. After you hide your personal calendar, you can see public folder and SharePoint calendars.

Click the Change View icon and choose List. When you click on the public folder calendar, you will see a list of calendar entries within the public folder calendar (Figure 4).

Figure 4. List view displays all the public folder calendar entries.

Finally, select all of the items in the list and drag them to your SharePoint calendar. This will copy all items to SharePoint (Figure 5).

Figure 5. Events from the public folder calendar are now displayed in SharePoint.

How to migrate public folder tasks to SharePoint 2010
SharePoint 2010 considers tasks to be types of lists, so you should click on the Lists link. SharePoint 2010 already includes a built-in Tasks list (Figure 6).

Figure 6. SharePoint 2010 has a built-in Tasks list.

Click on the ribbon’s List Tools -> List tab and you’ll see the Connect to Outlook icon. Once the list is connected to Outlook, open the public folder tasks and SharePoint tasks directly through the Outlook folder list, as shown in Figure 7.

Figure 7. Access both the public folder tasks list and the SharePoint tasks list through Outlook’s Folder view.

To move tasks from the public folder to the SharePoint list, simply drag and drop them (Figure 8).

Figure 8. The public folder tasks list has been migrated to SharePoint 2010.

Moving public folder contacts to SharePoint 2010
SharePoint doesn’t provide a default Contacts list, as you can see in Figure 6. To create one, go to the front page, click the Lists link and then the Create link. SharePoint will ask what type of list you want to create. Choose the Contacts option (Figure 9).

Figure 9. SharePoint allows you to create a Contacts List.

After you’ve created the Contacts list, you can migrate your public folder contacts to SharePoint using the same method you used to migrate the Tasks list.

Posted in Windows 2008

Setup FTP 7.5 on Windows Server 2008 and publish through Forefront TMG 2010

Posted by Alin D on November 2, 2010

Introduction

Microsoft has created a new FTP service that has been completely rewritten for Windows Server® 2008. This new FTP service incorporates many new features that enable web authors to publish content more easily than before, and it offers web administrators more security and deployment options.

  • Integration with IIS 7: IIS 7 has a brand-new administration interface and configuration store, and the new FTP service is tightly integrated with this new design. The old IIS 6.0 metabase is gone, and a new configuration store that is based on the .NET XML-based *.config format has taken its place. In addition, IIS 7 has a new administration tool, and the new FTP server plugs seamlessly into that paradigm.
  • Support for new Internet standards: One of the most significant features in the new FTP server is support for FTP over SSL. The new FTP server also supports other Internet improvements such as UTF8 and IPv6.
  • Shared hosting improvements: By fully integrating into IIS 7, the new FTP server makes it possible to host FTP and Web content from the same site by simply adding an FTP binding to an existing Web site. In addition, the FTP server now has virtual host name support, making it possible to host multiple FTP sites on the same IP address. The new FTP server also has improved user isolation, now making it possible to isolate users through per-user virtual directories.
  • Custom authentication providers: The new FTP server supports authentication using non-Windows accounts, such as IIS Manager users and .NET Membership.
  • Improved logging support: FTP logging has been enhanced to include all FTP-related traffic, unique tracking for FTP sessions, FTP sub-statuses, additional detail fields in FTP logs, and much more.
  • New supportability features: IIS 7 has a new option to display detailed error messages for local users, and the FTP server supports this by providing detailed error responses when logging on locally to an FTP server. The FTP server also logs detailed information using Event Tracing for Windows (ETW), which provides additional detailed information for troubleshooting.
  • Extensible feature set: FTP supports extensibility that allows you to extend the built-in functionality that ships with the FTP service. More specifically, there is support for creating your own authentication and authorization providers. You can also create providers for custom FTP logging and for determining the home directory information for your FTP users.

Additional information about new features in FTP 7.5 is available in the “What’s New for Microsoft FTP 7.5?” topic on Microsoft’s http://www.iis.net/ web site.

This document will walk you through installing the new FTP service and troubleshooting installation issues.

Installing FTP for IIS 7.5

IIS 7.5 for Windows Server 2008 R2

  1. On the taskbar, click Start, point to Administrative Tools, and then click Server Manager.
  2. In the Server Manager hierarchy pane, expand Roles, and then click Web Server (IIS).
  3. In the Web Server (IIS) pane, scroll to the Role Services section, and then click Add Role Services.
  4. On the Select Role Services page of the Add Role Services Wizard, expand FTP Server.
  5. Select FTP Service. (Note: To support ASP.NET Membership or IIS Manager authentication for the FTP service, you will also need to select FTP Extensibility.)
  6. Click Next.
  7. On the Confirm Installation Selections page, click Install.
  8. On the Results page, click Close.

Installing FTP for IIS 7.0

Prerequisites

The following items are required to complete the procedures in this section:

  1. You must be using Windows Server 2008.
  2. Internet Information Services 7.0 must be installed.
  3. If you are going to manage the new FTP server by using the IIS 7.0 user interface, the administration tool will need to be installed.
  4. You must install the new FTP server as an administrator. (See the Downloading and Installing section for more.)
  5. IIS 7.0 supports a shared configuration environment, which must be disabled on each server in a web farm before installing the new FTP server on each node. Note: Shared configuration can be re-enabled after the FTP server has been installed.
  6. The FTP server that is shipped on the Windows Server 2008 DVD must be uninstalled before installing the new FTP server.
Downloading the right version for your server

There are two separate downloadable packages for the new FTP server; you will need to download the appropriate package for your version of Windows Server 2008:

Launching the installation package

You will need to run the installation package as an administrator. This can be accomplished by one of the following methods:

  1. Logging in to your server using the actual account named “Administrator”, then browsing to the download pages listed above or double-clicking the download package if you have saved it to your server.
  2. Logging on using an account with administrator privileges, opening a command prompt by right-clicking the Command Prompt menu item located in the Accessories menu for Windows programs and selecting “Run as administrator”, and then typing the appropriate command listed below for your version of Windows to run the installation:
    • 32-bit Windows Versions:
      • msiexec /i ftp7_x86_75.msi
    • 64-bit Windows Versions:
      • msiexec /i ftp7_x64_75.msi

Note: One of the above steps is required because the User Account Control (UAC) security component in the Windows Vista and Windows Server 2008 operating systems prevents access to your applicationHost.config file. For more information about UAC, please see the following documentation:
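If you script the installation, the right package can be chosen per architecture. A small illustrative Python sketch (the ftp7_*_75.msi file names are assumed from the packages above, and running msiexec still requires an elevated prompt as noted):

```python
def msiexec_install(arch):
    """Build the msiexec command line for the FTP 7.5 installer
    package matching the server architecture ('x86' or 'x64').

    Package file names are assumptions based on the downloads above.
    """
    packages = {"x86": "ftp7_x86_75.msi", "x64": "ftp7_x64_75.msi"}
    return ["msiexec", "/i", packages[arch]]

print(msiexec_install("x64"))
# → ['msiexec', '/i', 'ftp7_x64_75.msi']
```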

The following steps walk you through all of the required settings to add FTP publishing for the Default Web Site.

Walking through the installation process
  1. When the installation package opens, you should see the following screen. Click Next to continue.
  2. On the next screen, click the I accept check box if you agree to the license terms, and then click Next.
  3. The following screen lists the installation options. Choose which options you want installed from the list, and then click Next.
    • Common files: this option includes the schema file. When installing in a shared server environment, each server in the web farm will need to have this option installed.
    • FTP Publishing Service: this option includes the core components of the FTP service. This option is required for the FTP service to be installed on the server.
    • Managed Code Support: this is an optional component, but features that use managed extensibility, such as ASP.NET and IIS Manager authentication, require it. Note: This feature cannot be installed on Windows Server 2008 Core.
    • Administration Features: this option installs the FTP 7 management user interface. It requires IIS 7.0 Manager and the .NET Framework 2.0 to be installed. Note: This feature cannot be installed on Windows Server 2008 Core.
  4. On the following screen, click Install to begin installing the options that you chose on the previous screen.
  5. When installation has completed, click Read notes to view the FTP README file, or click Finish to close the installation dialog.

Note: If an error occurs during installation, you will see an error dialog. Refer to the Troubleshooting Installation Issues section of this document for more information.

Troubleshooting Installation Issues

When the installation of FTP 7 fails for some reason, you should see a dialog with a button called “Installation log”. Clicking the “Installation log” button will open the MSI installation log that was created during the installation. You can also manually enable installation logging by running the appropriate command listed below for your version of Windows. This will create a log file that will contain information about the installation process:

  • 32-bit Windows Versions:
    • msiexec /L ftp7.log /i ftp7_x86_75.msi
  • 64-bit Windows Versions:
    • msiexec /L ftp7.log /i ftp7_x64_75.msi

You can analyze this log file after a failed installation to help determine the cause of the failure.
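Windows Installer logs mark a failing action with the text “Return value 3”, so a quick scan for that string usually locates the failure. A small Python sketch (the sample log lines are invented for illustration):

```python
def failed_actions(log_text):
    """Return the MSI log lines that report a failing action.

    Windows Installer records a failed action as 'Return value 3.'
    """
    return [line.strip() for line in log_text.splitlines()
            if "Return value 3" in line]

# Invented sample lines in the usual MSI log shape:
sample = """Action start 10:00:00: InstallFiles.
Action ended 10:00:01: InstallFiles. Return value 1.
Action ended 10:00:02: ConfigureFtp. Return value 3.
"""
print(failed_actions(sample))
# → ['Action ended 10:00:02: ConfigureFtp. Return value 3.']
```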

Clicking the “Online information” button on the error dialog will launch the “Installing and Troubleshooting FTP 7.5” document in your web browser.

Note: If you attempt to install the downloaded package on an unsupported platform, the following dialog will be displayed:

Known Issues in This Release

The following issues are known to exist in this release:

  1. While Web-based features can be delegated to remote managers and added to web.config files using the new IIS 7 configuration infrastructure, FTP features cannot be delegated or stored in web.config files.
  2. The icon of a combined Web/FTP site may be marked with a question mark even though the site is currently started with no error. This occurs when a site has a mixture of HTTP/FTP bindings.
  3. After adding FTP publishing to a Web site, clicking the site’s node in the tree view of the IIS 7 management tool may not display the FTP icons. To work around this issue, do one of the following:
    • Hit F5 to refresh the IIS 7 management tool.
    • Click on the Sites node, then double-click on the site name.
    • Close and re-open the IIS 7 management tool.
  4. When you add a custom provider in the site defaults, it shows up under each site. However, if you attempt to remove or modify the settings for a custom provider at the site level, IIS creates an empty <providers /> section for the site, but the resulting configuration for each site does not change. For example, if the custom provider is enabled in the site defaults, you cannot disable it at the site level. To work around this problem, open your applicationHost.config file as an administrator, add a <clear/> element to the list of custom authentication providers, then manually add the custom provider to your settings. For example, to add the IIS Manager custom authentication provider, you would add settings like the following:
    <ftpServer>
      <security>
        <authentication>
          <customAuthentication>
            <providers>
              <clear />
              <add name="IisManagerAuth" enabled="true" />
            </providers>
          </customAuthentication>
        </authentication>
      </security>
    </ftpServer>
  5. The following issues are specific to the IIS 7.0 release:
    • The FTP service that is shipped on the Windows Server 2008 DVD should not be installed after the new FTP service has been installed. The old FTP service does not detect that the new FTP service has been installed, and running both FTP services at the same time may cause port conflicts.
    • IIS 7 can be uninstalled after the new FTP service has been installed, and this will cause the new FTP service to fail. If IIS is reinstalled, new copies of the IIS configuration files will be created and the new FTP service will continue to fail because the configuration information for the new FTP service is no longer in the IIS configuration files. To fix this problem, re-run the setup for the new FTP service and choose “Repair”.
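The applicationHost.config workaround from known issue 4 can also be applied programmatically. A sketch using Python’s xml.etree.ElementTree on a trimmed, illustrative fragment of the configuration (a real file should be edited as administrator and backed up first):

```python
import xml.etree.ElementTree as ET

# A trimmed applicationHost.config fragment (illustrative only).
CONFIG = """<ftpServer>
  <security>
    <authentication>
      <customAuthentication>
        <providers />
      </customAuthentication>
    </authentication>
  </security>
</ftpServer>"""

def enable_iis_manager_auth(xml_text):
    """Insert a <clear /> followed by the IisManagerAuth provider,
    mirroring the manual applicationHost.config workaround above."""
    root = ET.fromstring(xml_text)
    providers = root.find(
        "./security/authentication/customAuthentication/providers")
    providers.insert(0, ET.Element("clear"))
    ET.SubElement(providers, "add", name="IisManagerAuth", enabled="true")
    return ET.tostring(root, encoding="unicode")

result = enable_iis_manager_auth(CONFIG)
```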

To Add an FTP Site from the IIS Management Console

Creating a New FTP Site Using IIS 7 Manager

The new FTP service makes it easy to create new FTP sites by providing you with a wizard that walks you through all of the required steps to create a new FTP site from scratch.

Step 1: Use the FTP Site Wizard to Create an FTP Site

In this first step you will create a new FTP site that anonymous users can open.

Note: The settings listed in this walkthrough specify “%SystemDrive%\inetpub\ftproot” as the path to your FTP site. You are not required to use this path; however, if you change the location for your site you will have to change the site-related paths that are used throughout this walkthrough.
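As the note says, environment variables are allowed in content paths. Python’s ntpath module can preview how such a path expands; the C: value below is set only for illustration (Windows defines %SystemDrive% itself):

```python
import ntpath  # Windows path semantics, usable on any platform
import os

os.environ["SystemDrive"] = "C:"  # illustration only; Windows sets this
path = ntpath.expandvars(r"%SystemDrive%\inetpub\ftproot")
print(path)
# → C:\inetpub\ftproot
```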

  1. Open IIS 7 Manager. In the Connections pane, click the Sites node in the tree.
  2. As shown in the image below, right-click the Sites node in the tree and click Add FTP Site, or click Add FTP Site in the Actions pane.
    • Create a folder at “%SystemDrive%\inetpub\ftproot”
    • Set the permissions to allow anonymous access:
      1. Open a command prompt.
      2. Type the following command:
        ICACLS "%SystemDrive%\inetpub\ftproot" /Grant IUSR:R /T
      3. Close the command prompt.


  3. When the Add FTP Site wizard appears:
    • Enter “My New FTP Site” in the FTP site name box, then navigate to the %SystemDrive%\inetpub\ftproot folder that you created in the Prerequisites section. Note that if you choose to type in the path to your content folder, you can use environment variables in your paths.
    • When you have completed these items, click Next.


  4. On the next page of the wizard:
    • Choose an IP address for your FTP site from the IP Address drop-down, or choose to accept the default selection of “All Unassigned.” Because you will be using the administrator account later in this walk-through, you must ensure that you restrict access to the server and enter the local loopback IP address for your computer by typing “127.0.0.1” in the IP Address box. (Note: If you are using IPv6, you should also add the IPv6 localhost binding of “::1”.)
    • Enter the TCP/IP port for the FTP site in the Port box. For this walk-through, choose to accept the default port of 21.
    • For this walk-through, do not use a host name; make sure that the Virtual Host box is blank.
    • Make sure that the Certificates drop-down is set to “Not Selected” and that the Allow SSL option is selected.
    • When you have completed these items, click Next.


  5. On the next page of the wizard:
    • Select Anonymous for the Authentication settings.
    • For the Authorization settings, choose “Anonymous users” from the Allow access to drop-down, and select Read for the Permissions option.
    • When you have completed these items, click Finish.


Summary

You have successfully created a new FTP site using the new FTP service. To recap the items that you completed in this step:

  1. You created a new FTP site named “My New FTP Site”, with the site’s content root at “%SystemDrive%\inetpub\ftproot”.
  2. You bound the FTP site to the local loopback address for your computer on port 21, and you chose not to require Secure Sockets Layer (SSL) for the FTP site.
  3. You created a default rule for the FTP site to allow anonymous users “Read” access to the files.
Step 2: Adding Additional FTP Security Settings

Creating a new FTP site that anonymous users can browse is useful for public download sites, but web authoring is equally important. In this step, you add additional authentication and authorization settings for the administrator account. To do so, follow these steps:

  1. In IIS 7 Manager, click the node for the FTP site that you created earlier, then double-click FTP Authentication to open the FTP authentication feature page.
  2. When the FTP Authentication page displays, highlight Basic Authentication and then click Enable in the Actions pane.
  3. In IIS 7 Manager, click the node for the FTP site to re-display the icons for all of the FTP features.
  4. You must add an authorization rule so that the administrator can log in. To do so, double-click the FTP Authorization Rules icon to open the FTP authorization rules feature page.
  5. When the FTP Authorization Rules page is displayed, click Add Allow Rule in the Actions pane.
  6. When the Add Allow Authorization Rule dialog box displays:
    • Select Specified users, then type “administrator” in the box.
    • For Permissions, select both Read and Write.
    • When you have completed these items, click OK.
Summary

To recap the items that you completed in this step:

  1. You added Basic authentication to the FTP site.
  2. You added an authorization rule that allows the administrator account both “Read” and “Write” permissions for the FTP site.
Step 3: Logging in to Your FTP Site

In Step 1, you created an FTP site that anonymous users can access, and in Step 2 you added additional security settings that allow an administrator to log in. In this step, you log in both anonymously and with your administrator account.

Note: In this step you log in to your FTP site using the local administrator account. When you created the FTP site in Step 1, you bound it to the local loopback IP address. If you did not use the local loopback address, use SSL to protect your account settings. If you prefer to use a separate user account instead of the administrator account, set the correct permissions for that user account for the appropriate folders.

Logging in to your FTP site anonymously
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server: ftp localhost
  3. When prompted for a user name, enter “anonymous”.
  4. When prompted for a password, enter your email address.

You should now be logged in to your FTP site anonymously. Based on the authorization rule that you added in Step 1, you should only have Read access to the content folder.

Logging in to your FTP site using your administrator account
  1. On your FTP server, open a command prompt session.
  2. Type the following command to connect to your FTP server: ftp localhost
  3. When prompted for a user name, enter “administrator”.
  4. When prompted for a password, enter your administrator password.

You should now be logged in to your FTP site as the local administrator. Based on the authorization rule that you added in Step 2 you should have both Read and Write access to the content folder.
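Both command-prompt logins can also be exercised from a script. A minimal sketch using Python’s standard ftplib; the host and credentials are the walkthrough’s placeholders, and the ftp_factory parameter exists only so the logic can be tested without a live server:

```python
from ftplib import FTP

def ftp_login(host, user, password, ftp_factory=FTP):
    """Connect and log in, returning the server's welcome banner.

    ftp_factory is injectable so the logic can be exercised without
    a live FTP server.
    """
    ftp = ftp_factory()
    ftp.connect(host, 21)
    ftp.login(user, password)
    welcome = ftp.getwelcome()
    ftp.quit()
    return welcome

# Anonymous login, mirroring Step 3 (email address as password):
#   ftp_login("localhost", "anonymous", "user@example.com")
# Administrator login:
#   ftp_login("localhost", "administrator", "<your password>")
```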

Summary

To recap the items that you completed in this step:

  1. You logged in to your FTP site anonymously.
  2. You logged in to your FTP site as the local administrator.

Publish FTP site from Forefront TMG 2010

Let’s begin

Note:
Keep in mind that the information in this article is based on a release candidate version of Microsoft Forefront TMG and is subject to change.

A few months ago, Microsoft released RC 1 (Release Candidate) of Microsoft Forefront TMG (Threat Management Gateway), which has a lot of new exciting features.

One of the new features of Forefront TMG is its ability to allow FTP server traffic through the firewall in both directions. It does this in the form of firewall access rules for outbound FTP access, and with server publishing rules for inbound FTP access to a published FTP server located in your internal network or in a perimeter network, also known as a DMZ (assuming you are not using public IP addresses for the FTP server in the DMZ).

First, I will show you the steps you will need to follow in order to create a Firewall rule which will allow FTP access for outgoing connections through TMG.

FTP access rule

Create a new access rule which allows the FTP protocol for your clients. If you want to allow FTP access for your clients, the clients must be SecureNAT or TMG clients; the TMG client was known as the Firewall Client in the ISA Server predecessors of Forefront TMG.

Please note:
If you are using the Web proxy client, note that only read-only FTP access is possible through this client type, and you cannot use a classic FTP client; only web browser FTP access is possible, with some limitations.

The following picture shows an FTP access rule.

Figure 1: FTP access rule

A well-known pitfall, dating back to ISA Server 2004, is that by default a newly created FTP access rule only allows read-only FTP access. This is a security measure to prevent users from uploading confidential data outside the organization without permission. If you want to enable FTP uploads, right-click the FTP access rule and then click Configure FTP.

Figure 2: Configure FTP

All you have to do is clear the Read Only flag; after the next FTP connection is established, users have the permissions necessary to carry out FTP uploads.

Figure 3: Allow write access through TMG

FTP Server publishing

If you want to allow incoming FTP connections to your internal FTP servers, or to FTP servers located in the DMZ, you have to create server publishing rules if the network relationship between the external and the internal/DMZ network is NAT. If you are using a route network relationship, it is possible to use Firewall rules to allow FTP access.

To gain access to an FTP server in your internal network, create an FTP server publishing rule.

Simply start the new Server Publishing Rule Wizard and follow the instructions.

As the protocol, you have to select the FTP Server protocol definition, which allows inbound FTP access.

Figure 4: Publish the FTP-Server protocol

The standard FTP Server protocol definition uses the associated standard protocol, which can be inspected by NIS if a NIS signature is available.

Figure 5: FTP-Server protocol properties

The standard FTP Server protocol definition allows inbound access on TCP port 21, and the protocol definition is bound to the FTP access filter, which is responsible for handling the FTP protocol ports (the FTP data and FTP control ports).

Figure 6: FTP ports and FTP Access Filter binding

Active FTP

One of the changes in Microsoft Forefront TMG is that, for security reasons, the firewall no longer allows active FTP connections by default. You have to allow active FTP connections manually, which you can do in the properties of the FTP access filter. Navigate to the System node in the TMG management console, select the Application Filters tab, select the FTP Access filter, and in the task pane click Configure Selected Filter (Figure 7).

Figure 7: FTP Access filter properties

In the FTP access filter properties, select the FTP Properties tab, enable the Allow Active FTP Access check box, and save the configuration to the TMG storage.

Figure 8: Allow Active FTP through TMG
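To see why firewalls treat active FTP with suspicion, consider how an active-mode client advertises its data port. The following Python sketch (illustrative only, not part of TMG) encodes the argument of the FTP PORT command from RFC 959; the server then connects back to the client on that port, which is exactly the inbound connection TMG blocks unless active FTP is enabled:

```python
def encode_port_command(host: str, port: int) -> str:
    """Build an FTP PORT command (RFC 959).

    In active FTP the client listens on a local port and sends
    PORT h1,h2,h3,h4,p1,p2 so that the *server* opens the data
    connection back to the client on h1.h2.h3.h4:(p1*256+p2).
    """
    h1, h2, h3, h4 = host.split(".")
    p1, p2 = divmod(port, 256)  # high byte, low byte of the port
    return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}"

print(encode_port_command("10.0.0.5", 50000))  # PORT 10,0,0,5,195,80
```

Passive FTP avoids the problem by having the client open both connections, which is why it is the firewall-friendly default.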

FTP alerts

Forefront TMG comes with many predefined alert settings for its components and events. One of them is the FTP Filter Initialization Warning, which informs the administrator when the FTP filter fails to parse the allowed FTP commands.

Figure 9: Configure FTP alert options

The alert actions are almost the same as in ISA Server 2006, so experienced ISA administrators will find nothing new here.

Conclusion

In this article, I showed you several ways to allow FTP access through the TMG server. There are some pitfalls on the way to a successful FTP implementation. One, dating back to ISA Server 2004, is that FTP write access through the firewall must be explicitly enabled. The other is new to Forefront TMG: active mode FTP connections are not allowed by default, so you have to activate this feature manually if you really need this type of special configuration.

Posted in TUTORIALS | Leave a Comment »

Configure Forefront TMG 2010 as WPAD server (Auto Proxy Discovery)

Posted by Alin D on October 18, 2010

WPAD stands for Web Proxy Auto-Discovery Protocol. WPAD carries proxy configuration information to clients: Windows clients use the WPAD protocol to obtain proxy information from DHCP and DNS servers. The client queries for a WPAD entry, which returns the address of the WPAD server on which WPAD.dat or Wspad.dat is stored. The WPAD server can be a Forefront TMG server or a separate IIS server hosting the WPAD.dat or Wspad.dat URL. Configuring a WPAD server is fairly simple, as described in the following steps:

  1. Select and configure an automatic discovery mechanism.
  2. Implement a WPAD server with DNS, or implement a WPAD server with DHCP.
  3. Configure automatic discovery through GPO for Windows client computers

What’s in the Wpad.dat and Wspad.dat files? The Wpad.dat file is a Microsoft JScript® file that the client web browser uses to configure its proxy settings. Wpad.dat contains the following information:

  • The proxy server that should be used for client requests.
  • Domains and IP addresses that should be accessed directly, bypassing the proxy.
  • An alternate route in case the proxy is not available.
  • On TMG Enterprise edition, a list of all servers in the array.

The TMG Server WSPAD implementation uses the WPAD mechanism and constructs the Wspad.dat file to provide the client with proxy settings, plus some additional Firewall Client configuration information not required for automatic detection. The relevant automatic detection entries in Wspad.dat are the server name and port.
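For illustration, here is a small Python sketch that generates a minimal wpad.dat of the kind described above. The host name and bypass pattern are hypothetical placeholders; a production script would carry your own proxy address and exception rules:

```python
def make_wpad(proxy_host, proxy_port, direct_patterns):
    """Generate a minimal wpad.dat: a JScript FindProxyForURL function."""
    bypass = " || ".join('shExpMatch(host, "%s")' % p for p in direct_patterns)
    lines = [
        "function FindProxyForURL(url, host) {",
        "    // go direct for plain host names and listed patterns",
        "    if (isPlainHostName(host) || %s)" % bypass,
        '        return "DIRECT";',
        "    // otherwise use the proxy, falling back to direct if it is down",
        '    return "PROXY %s:%d; DIRECT";' % (proxy_host, proxy_port),
        "}",
    ]
    return "\n".join(lines) + "\n"

# hypothetical proxy host and internal domain
script = make_wpad("tmg.example.local", 8080, ["*.example.local"])
print(script)
```

The generated script covers the three items listed above: the proxy to use, the names to access directly, and an alternate route (DIRECT) if the proxy is unavailable.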

Configure a WPAD Entry on an authoritative DHCP Server:

Click Start, point to All Programs, point to Administrative Tools, and then click DHCP.

In the console tree, right-click the applicable DHCP server, click Set Predefined Options, and then click Add.


In Name, type WPAD. In Code, type 252. In Data type, select String, and then click OK.


In String, type http://Computer_Name:Port/wpad.dat, where Port is the port number on which automatic discovery information is published. You can specify any port number; by default, Forefront TMG publishes automatic discovery information on port 8080. Ensure that you type wpad.dat in lowercase letters, because Forefront TMG treats the file name as case sensitive.


Right-click Scope Options, and then click Configure options. Confirm that Option 252 is selected.


Note: Assign the primary domain name to clients using DHCP; a DHCP server can be configured with a scope option to supply DHCP clients with a primary domain name. You can use port 8080 if you are delivering WPAD via DHCP, since most organizations already use port 80 for numerous web applications or the primary web site. My preferred method is to deliver WPAD using DHCP.
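As a sanity check, the option 252 value is just a URL string. A small Python helper (the host name is a placeholder) shows the format, including TMG's default port 8080 and the required lowercase file name:

```python
def wpad_option_252(host, port=8080):
    """Build the string value for DHCP option 252 (the WPAD URL).

    Forefront TMG publishes autodiscovery on port 8080 by default,
    and the file name must be the lowercase 'wpad.dat'.
    """
    url = "http://%s:%d/wpad.dat" % (host, port)
    assert url.endswith("wpad.dat"), "file name is case sensitive"
    return url

print(wpad_option_252("tmg.example.local"))
# http://tmg.example.local:8080/wpad.dat
```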

Configuring WPAD Entry in Active Directory DNS (AD DS):

Click Start, point to All Programs, point to Administrative Tools, and then click DNS.

In the console tree, right-click the forward lookup zone for your domain, and click New Alias (CNAME).


In Alias name, type WPAD.


In Fully qualified name for target host, type the FQDN of the WPAD server. If the Forefront TMG computer or array already has a host (A) record defined, you can click Browse to search the DNS namespace for the Forefront TMG server name.


Note: If clients belong to multiple domains, you will need a DNS entry for each domain. Firewall clients should be configured to resolve the WPAD entry using an internal DNS server. For WPAD entries obtained from DNS, the WPAD server must listen on port 80. Do NOT configure a CNAME entry in AD DS if you are using DHCP to deliver WPAD.
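The reason each domain needs its own entry is that clients derive the WPAD lookup name from their own DNS suffix. Here is a rough Python sketch of the candidate names a client might try (the exact devolution rules vary by Windows version, so treat this as an approximation):

```python
def wpad_candidates(dns_suffix):
    """List WPAD host names a client with the given DNS suffix may try,
    walking up the domain hierarchy and stopping before the bare TLD."""
    labels = dns_suffix.split(".")
    names = []
    while len(labels) >= 2:  # never query wpad.<tld>
        names.append("wpad." + ".".join(labels))
        labels = labels[1:]
    return names

print(wpad_candidates("sales.corp.example.com"))
# ['wpad.sales.corp.example.com', 'wpad.corp.example.com', 'wpad.example.com']
```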

Important! Use only ONE delivery method: either DNS or DHCP.

Configuring TMG Server as the WPAD Server: You can configure Forefront TMG as the WPAD server as follows.

In the console tree of Forefront TMG Management, click Networking. In the details pane, click the Networks tab, and then select the network on which you want to listen for WPAD requests from clients (usually the default Internal network).


On the Tasks tab, click Edit Selected Network.

On the Auto Discovery tab, select Publish automatic discovery information.

In Use this port for automatic discovery requests, specify the port on which the Forefront TMG WPAD server should listen for WPAD requests from clients.


Click the Forefront TMG Client tab and check Enable Forefront TMG Client Support for this network. By default, the TMG server name is selected here; in TMG Enterprise Edition, you can select any array member hosting WPAD. Check Automatically Detect Settings, check Use Automatic configuration script and select Use Default URL, and check Use a web proxy server. You may select one of the following:


  • Use default URL. Forefront TMG provides a default configuration script at the location http://FQDN:8080/array.dll?Get.Routing.Script, where the FQDN is that of the Forefront TMG computer. This script contains the settings specified on the Web Browser tab of the network properties.
  • Use custom URL. As an alternative to the default script, you can construct your own Proxy Auto-Configuration (PAC) file and place it on a Web server. When the client Web browser looks for the script at the specified URL, the Web server receives the request and returns the custom script to the client.


Apply the changes and click OK.

To run the AD marker tool for automatic detection: use this tool if you use Active Directory as the delivery mechanism.

To store the marker key in Active Directory, at the command prompt, type:

TmgAdConfig.exe add -default -type winsock -url <service-url> [-f] where:

The service-url entry should be in the format http://<TMG Server Name>:8080/wspad.dat.

The following commands and parameters can also be used:

To delete a key from Active Directory, at a command line prompt, type: TmgAdConfig.exe del -default -type winsock

To configure the Active Directory marker for a specific site, use the -site command line parameter.

For a complete list of options, type TmgAdConfig.exe -?

For detailed usage information, type TmgAdConfig.exe <command> -help

The TmgAdConfig tool creates the following registry key in Active Directory: LDAP://Configuration/Services/Internet Gateway(“Container”) /Winsock Proxy(“ServiceConnectionPoint”)

The key’s server binding information will be set to <service-url>. This key will be retrieved by the Forefront TMG Client and will be used to download the wspad configuration file.

Configuring an Alternative WPAD Server: An alternative configuration is to place the Wpad.dat and Wspad.dat files on another computer instead of on the TMG Server computer. For example, you can place the files on a server running IIS. In such a configuration, the DNS and DHCP entries point to the computer running IIS, and this computer acts as a dedicated redirector to provide WPAD and WSPAD information to clients. The simplest way to download the Wpad.dat and Wspad.dat files is to connect to the TMG Server computer through a Web browser and obtain the files from the following URLs:



Configuring Internet Explorer for automatic discovery on a single computer: configure WPAD automatic detection for the DHCP delivery method as follows:

  1. In Internet Explorer, click the Tools menu, and then click Internet Options.
  2. On the Connections tab, click LAN Settings.
  3. On the Local Area Network (LAN) Settings tab, select Automatically detect settings.


Enabling browsers for automatic detection using a static/custom configuration script

  1. In Internet Explorer, click the Tools menu, and then click Internet Options.
  2. On the Connections tab, click LAN Settings.
  3. On the Local Area Network (LAN) Settings tab, select Use automatic configuration script. Enter the script location as http://fqdnserver:port/array.dll?Get.Routing.Script, where fqdnserver is the fully qualified domain name (FQDN) of the Forefront TMG server. The configuration script location can be specified in each browser individually, or set for all clients by using Group Policy.



To export the settings from your computer to an .ins file using IEM

In Group Policy, double-click Local Computer Policy, double-click User Configuration, and then double-click Windows Settings.


Right-click Internet Explorer Maintenance, and then click Export Browser Settings.


Enter the location and name of the .ins file that you want to use.


Copy this WPAD.INS file and host it on a separate IIS server.

Configure Automatic Detection through GPO for the entire Windows fleet

Log on to Domain Controller as an administrator.

Open the Group Policy Management Console, select the desired Organizational Unit, right-click it, and click Create a GPO in this domain, and Link it here.

Type the name of the GPO and click OK.


Right-click the newly created GPO and click Edit.

In the GPO editor, expand User Configuration > Windows Settings > Internet Explorer Maintenance > Connections, and double-click Automatic Browser Configuration.


If you decide to use DHCP as the WPAD.dat delivery method, check Automatically Detect Configuration Settings.


If you decide to use the default routing script from the TMG server, use the following option:


If you want to deliver wpad.dat through a DNS server, use the following option:


For WPAD.INS deployment use the following option


In Automatically configure every ~ minutes, you can set a refresh interval; type 0 (zero) to update automatically only after a restart.

Testing Automatic Detection

To test the DHCP delivery method, log on to a client machine, open IE8, and set the IE proxy settings to Automatically detect settings.

Run GPUPDATE.exe /force and reboot the computer.


Browse to any website to verify that the browser detects the proxy.


For a WPAD entry in DNS, you can test the automatic discovery mechanism by typing the following in the Web browser:

For a WPAD entry in DHCP, you specify the FQDN of the WPAD server. For example, if the WPAD DHCP entry is available on a TMG Server computer, type the following:

To test that the automatic configuration script is being retrieved as expected, type the following in the Web browser:
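You can rehearse this round trip offline with Python's standard library: serve a stub wpad.dat from a scratch directory and fetch it over HTTP, which is the same request the browser sends to the WPAD server (the URL and script content here are stand-ins, not TMG's real output):

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Put a stub wpad.dat in a scratch directory.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "wpad.dat"), "w") as f:
    f.write('function FindProxyForURL(url, host) { return "DIRECT"; }\n')

# Serve that directory on a free local port, in a background thread.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=workdir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same GET the browser performs against the WPAD server.
url = "http://127.0.0.1:%d/wpad.dat" % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(body)
```

If the real server is configured correctly, fetching the wpad.dat URL in a browser should likewise return the script text rather than an error page.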

Posted in TUTORIALS | Leave a Comment »

Chrome makes play for minds and hearts of administrators

Posted by Alin D on October 7, 2010

Google celebrated the two-year anniversary of its Chrome web browser this month by making some changes to it designed to encourage administrators to cast a more approving eye on the software.

Generally, administrators are a tough lot when it comes to change. Their plates are usually full and it takes a compelling sell to persuade them to desert the status quo. If Microsoft has difficulties weaning many administrators from Internet Explorer 6, with its horrendous security record, to a safer version of the software, how does Google expect to induce administrators to bolt to an entirely new web browser?

One way, it appears, is to give administrators greater control over how Chrome behaves. For example, it allows administrators to cut off a feature that allows the browser to automatically update itself. Automatic updates are convenient for users. They can fix annoying problems that can have dire consequences for a computer’s operation or its data. More important, they can plug security holes in a program.

The problem for administrators, however, is that they can create unforeseen snags on a user’s system or even open up new security holes. If an administrator can evaluate the update before it’s implemented, he or she can prevent those problems from developing. Automatic updates can preempt such an evaluation and spread those potential hassles throughout an organization’s system like a virus. In addition, updates can give hackers an entry point into a network. Once a system’s defense systems are trained to accept automatic updates, they will ignore programs that behave like updates–even if those programs are malware written by hackers. What’s more, crackers can intercept requests for updates–through techniques like DNS hacking–and install older updates that will re-open old software flaws.

To turn off automatic updates for Chrome, Google recommends setting the following Windows registry value to the REG_SZ (string) value "0": HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update\AutoUpdateCheckPeriodMinutes.

As part of Chrome’s new administrator-friendly attitude, registry changes need not be made manually but can be made with easy-to-use templates.

If an administrator does choose to turn off automatic updates, Google cautions him or her to keep in mind that such action means his or her organization will not receive the latest security updates for the browser.
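Rather than editing the registry by hand on every machine, an administrator could distribute the setting as a .reg file. Here is a Python sketch that emits one; verify the key path against Google's current documentation before deploying it:

```python
def autoupdate_off_reg():
    """Build a .reg file that sets the auto-update check period to 0,
    mirroring the registry value described above."""
    return (
        "Windows Registry Editor Version 5.00\n"
        "\n"
        r"[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update]" "\n"
        '"AutoUpdateCheckPeriodMinutes"="0"\n'
    )

print(autoupdate_off_reg())
```

The resulting file can be imported with regedit or pushed out via a login script or Group Policy preference.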

Administrators can now also change the policies that Chrome respects. They include:

  • Setting the browser’s home page.
  • Determining if a new tab is created when the home page button is clicked in the browser.
  • Enabling or disabling safe browser mode.
  • Determining if error pages will appear in the browser.
  • Activating or deactivating Google Suggest, which recommends a completed URL for a partially typed URL in the browser’s address field.
  • Determining if anonymous statistics reporting and crash information should be reported back to Google.
  • Enabling or disabling DNS prefetching.
  • Enabling or disabling online saving of bookmarks or other profile information through synchronization.
  • Determining the manner in which Chrome determines the proxy server in use.
  • Specifying the URL of the proxy server in use when a specified proxy configuration has been created manually.
  • Specifying the URL of the .pac file to use when the specified proxy configuration is created manually.
  • Creating a list of exceptions for when not to use a proxy.
  • Overriding a system’s user interface language.
  • Creating a list of disabled plug-ins.

As extensive as that policy list is, there are a few omissions that administrators may like to see in the future, according to Lee Mathews, writing for DownloadSquad. “For example, while I can choose to disable certain plug-ins, there’s no switch to disallow extension installs,” he scribbled. “I’d also like to disable Chrome’s autofill feature, but it, too, is missing.”

Getting administrators to embrace Chrome could be a key to the browser’s success and the advancement of Google’s overall goals for the Internet. “Chrome has caught on among early adopters and has tens of millions of users,” opined Stephen Shankland in his DeepTech blog on Cnet. “Getting corporate buy-in could help the browser’s prospects, and with it Google’s ambition to make the Web a more powerful foundation for applications rather than just Web pages to visit.”

“Even with easier compatibility, though, corporate IT personnel are not known for their enthusiasm for embracing new software,” he added. “They’re often naturally conservative, since change can break internal applications, confuse users, and bring other complications. Letting administrators set Chrome behavior will, though, make it more palatable.”

Posted in Exchange | Leave a Comment »

Common Storage Configurations

Posted by Alin D on September 20, 2010

Introduction

In today’s world everything is on computers. More specifically, everything is stored on storage devices which are attached to computers in a number of configurations. There are many ways in which these devices can be accessed by users. Some are better than others and some are best for certain situations; in this article I will give an overview of some of these ways and describe some situations where one might want to implement them.

Firstly there is an architecture called Directly Attached Storage (DAS). This is what most people would think of when they think of storage devices. This type of architecture includes things like internal hard drives, external hard drives, and USB keys. Basically DAS refers to anything that attaches directly to a computer (or a server) without any network component (like a network switch) between them.


Figure 1: Three configurations for Direct Attached Storage solutions (Courtesy of ZDNetasia.com)

A DAS device can even accommodate multiple users concurrently accessing data. All that is required is that the device have multiple connection ports and the ability to support concurrent users. DAS configurations can also be used in large networks when they are attached to a server which allows multiple users to access the DAS devices. The only thing that DAS excludes is the presence of a network device between the storage device and the computer.

Many home users or small businesses require Network Attached Storage (NAS). NAS devices offer the convenience of centrally locating your storage devices, though not necessarily located with your computers. This feature is convenient for home users who may want to store their storage devices in their basement while roaming about their house with their laptop. This feature is equally appealing to small businesses where it may not be appropriate to have large storage devices where clients or customers are present. DAS configurations could also provide this feature, though not as easily or elegantly for smaller implementations.


Figure 2: Diagram of a Network Attached Storage system (Courtesy of windowsnas.com)

A NAS device is basically a stripped down computer. Though they don’t have monitors or keyboards they do have stripped down operating systems which you can configure, usually by connecting to the device via a web browser from a networked computer. NAS operating systems are typically stripped down versions of UNIX operating systems, such as the open source FreeNAS which is a stripped down version of FreeBSD. FreeNAS supports many file formats such as CIFS, FTP, NFS, TFTP, AFP, RSYNC, and iSCSI. Since FreeNAS is open source you’re also free to add your own implementation of any protocol you wish. In a future article I will provide more in-depth information on these protocols; so stay tuned.

Because NAS devices handle the file system functions themselves, they do not need a server to handle these functions for them. Networks that employ DAS devices attached to a server will require the server to handle the file system functions. This is another advantage of NAS over DAS. NAS “frees up” the server to do other important processing tasks because a NAS device is connected directly to the network and handles all of the file serving itself. This also means that a NAS device can be simpler to configure and maintain for smaller implementations because they won’t require a dedicated server.

NAS systems commonly employ RAID configurations to offer users a robust storage solution. In this respect NAS devices can be used in a similar manner as DAS devices (for robust data backup). The biggest, and most important, difference between NAS systems and DAS systems is that NAS systems contain at least one networking device between the end users and the NAS device(s).

NAS solutions are similar to another storage configuration called Storage Area Networks (SAN). The biggest difference between a NAS system and a SAN system is that a NAS device handles the file system functions of an operating system while a SAN system provides only block-based storage services and leaves the file system functions to be performed by the client computer.

Of course, that’s not to say that NAS can’t be employed in conjunction with SAN. In fact, large networks often employ SAN with NAS and DAS to meet the diverse needs of their network users.

One advantage that SAN systems have over NAS systems is that NAS systems are not as readily scalable. SAN systems can quite easily add servers in a cluster to handle more users. NAS systems employed in networks where the networks are growing rapidly are often incapable of handling the increase in traffic, even if they can handle the storage capacity.

This doesn’t mean that NAS systems cannot scale at all. You can, in fact, cluster NAS devices in a similar manner to how one would cluster servers in a SAN system. Doing this still allows full file access from any node in the NAS cluster. But just because something can be done, doesn’t mean it should be done; if you’re thinking of going down this path, tread carefully – I would recommend implementing a SAN solution instead.


Figure 3: Diagram of a Storage Area Network (Courtesy of anildesai.net)

However, NAS systems are typically less expensive than SAN systems and in recent years NAS manufacturers have concentrated on expanding their presence on home networks where many users have high storage demands for multimedia files. For most home users a less expensive NAS system which doesn’t require a server and rack space is a much more attractive solution when compared with implementing a SAN configuration.

SAN systems have many advantages over NAS systems. For instance, it is quite easy to replace a faulty server in a SAN system, whereas it is much more difficult to replace a NAS device which may or may not be clustered with other NAS devices. It is also much easier to geographically distribute storage arrays within a SAN system. This type of geographic distribution is often desirable for networks wanting a disaster-tolerant solution.

The biggest advantage of SAN systems is that they offer simplified management, scalability, flexibility, and improved data access and backup. For this reason SAN configurations are becoming quite common for large enterprises that take their data storage seriously.

Apart from large networks, SAN configurations are not very common. One exception is the video editing industry, which requires a high-capacity storage environment along with high bandwidth for data access. A SAN configuration using Fibre Channel is really the best solution for video editing networks and networks in similar industries.

While any of these three configurations (DAS, NAS, and SAN) can address the needs of most networks, putting a little bit of thought into the network design can save a lot of future effort as the network grows or the need arises to upgrade various aspects of the network. Choosing the right configuration is important: you need a configuration that meets your network's current needs and any predictable needs of the near- to medium-term future.

Posted in TUTORIALS | Comments Off on Common Storage Configurations

10 Core Concepts that Every Windows Network Admin Must Know

Posted by Alin D on September 13, 2010

Introduction

I thought this article might be helpful for Windows network admins who need some “brush-up tips”, as well as for those interviewing for network admin jobs, so I came up with a list of 10 networking concepts that every network admin should know.

So, here is my list of 10 core networking concepts that every Windows Network Admin (or those interviewing for a job as one) must know:

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses to names (forward and reverse lookups, respectively). Thus, when you go to a web page like http://www.windowsnetworking.com, without DNS that name would not be resolved to an IP address and you would not see the web page. In short, if DNS is not working, “nothing is working” for the end users.

DNS server IP addresses are either manually configured or received via DHCP. If you do an IPCONFIG /ALL in windows, you will see your PC’s DNS server IP addresses.


Figure 1: DNS Servers shown in IPCONFIG output

So, you should know what DNS is, how important it is, and how DNS servers must be configured and working for “almost anything” to work.

When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).


Figure 2: DNS name resolved to an IP address
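You can reproduce the same lookup that ping performs with a few lines of Python. "localhost" is used here so the example works offline; substitute any host name to resolve it through your configured DNS servers:

```python
import socket

# Resolve a name the same way ping does before sending its first
# echo request: ask the resolver for the host's IPv4 address.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```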

For more information on DNS servers, see Brian Posey’s article on DNS Servers.

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NIC) connected to Ethernet cables, running to Ethernet switches which connect everything together. Without a “link light” on the NIC and the switch, nothing is going to work.

MAC addresses (or Physical addresses) are unique strings that identify Ethernet devices. ARP (address resolution protocol) is the protocol that maps Ethernet MAC addresses to IP addresses. When you go to open a web page and get a successful DNS lookup, you know the IP address. Your computer will then perform an ARP request on the network to find out what computer (identified by their Ethernet MAC address, shown in Figure 1 as the Physical address) has that IP address.
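For illustration, Python's standard library can read one of your machine's MAC addresses and format it the way IPCONFIG shows a physical address:

```python
import uuid

# uuid.getnode() returns the MAC address of one local interface as a
# 48-bit integer; format it byte by byte, most significant first.
mac = uuid.getnode()
physical = "-".join(f"{(mac >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))
print(physical)
```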

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. IP addresses are 4 numbers separated by 3 periods like 1.1.1.1.

Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.

You can see my computer’s IP address in Figure 1 where it says IPv4 Address 10.0.1.107. You can also see that I received it via DHCP where it says DHCP Enabled YES.

Larger blocks of IP addresses are broken down into smaller blocks of IP addresses and this is called IP subnetting. I am not going to go into how to do it and you do not need to know how to do it from memory either (unless you are sitting for a certification exam) because you can use an IP subnet calculator, downloaded from the Internet, for free.
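In fact, you do not even need to download a calculator; Python's standard ipaddress module can do the subnet math for you:

```python
import ipaddress

# A /26 carves a /24 into four blocks of 64 addresses each.
net = ipaddress.ip_network("192.168.1.0/26")
print(net.netmask)        # 255.255.255.192
print(net.num_addresses)  # 64
hosts = list(net.hosts()) # usable host addresses (network/broadcast excluded)
print(hosts[0], hosts[-1])  # 192.168.1.1 192.168.1.62
```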

4.     Default Gateway

The default gateway, shown in Figure 3 as 10.0.1.1, is where your computer goes to talk to another computer that is not on your local LAN network. That default gateway is your local router. A default gateway address is not required but if it is not present you would not be able to talk to computers outside your network (unless you are using a proxy server).


Figure 3: Network Connection Details

5.     NAT and Private IP Addressing

Today, almost every local LAN network is using Private IP addressing (based on RFC1918) and then translating those private IPs to public IPs with NAT (network address translation). The private IP addresses always start with 192.168.x.x or 172.16-31.x.x or 10.x.x.x (those are the blocks of private IPs defined in RFC1918).

In Figure 2, you can see that we are using private IP addresses because the IP starts with “10”. It is my integrated router/wireless/firewall/switch device that is performing NAT and translating my private IP to my public Internet IP that my router was assigned from my ISP.
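The RFC 1918 ranges are easy to check programmatically, for example with Python's ipaddress module:

```python
import ipaddress

# The first three addresses fall in the RFC 1918 private blocks
# (10/8, 172.16/12, 192.168/16); the last is a public address.
for addr in ("10.0.1.107", "172.20.5.9", "192.168.1.50", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
```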

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and you have hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only allow in the types of traffic that should be allowed in.

For more information on Firewalls, checkout our Firewall articles.

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is “local”.

Your wide area network (WAN) is a “big network” that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN but, in larger enterprises or campus environments, routers route traffic between multiple IP subnets on the same large LAN.

On small home networks, you can have an integrated router that also offers firewall, multi-port switch, and wireless access point.

For more information on Routers, see Brian Posey’s Network Basics article on Routers.

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).

Unlike most of the other concepts above, the OSI model isn’t something that network admins use every day. The OSI model is for those seeking certifications like the Cisco CCNA or when taking some of the Microsoft networking certification tests. OR, if you have an over-zealous interviewer who really wants to quiz you.

To fulfill those wanting to quiz you, here is the OSI model:

  • Application – layer 7 – any application using the network; examples include FTP and your web browser
  • Presentation – layer 6 – how the data sent is presented; examples include JPG graphics, ASCII, and XML
  • Session – layer 5 – for applications that keep track of sessions; examples are applications that use Remote Procedure Calls (RPC), like SQL and Exchange
  • Transport – layer 4 – provides reliable communication over the network to make sure that your data actually “gets there”, with TCP being the most common transport layer protocol
  • Network – layer 3 – takes care of addressing on the network and helps to route the packets, with IP being the most common network layer protocol. Routers function at layer 3.
  • Data Link – layer 2 – transfers frames over the network using protocols like Ethernet and PPP. Switches function at layer 2.
  • Physical – layer 1 – controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.

At this point, let me stop downplaying the value of the OSI model. Even though it is theoretical, it is critical that network admins understand, and can visualize, how every piece of data on the network travels down and then back up this model. At every layer, the data from the layer above is encapsulated by the layer below, which adds its own header; in reverse, as the data travels back up the stack, each layer de-encapsulates the data it receives.
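That down-then-up journey can be sketched in a few lines of Python (a simplified illustration using string “headers”; real protocol headers are binary structures, not text tags):

```python
# Simplified OSI-style encapsulation: each lower layer wraps the data
# from the layer above with its own header, and the receiving side
# strips the headers off again in reverse order.

def encapsulate(app_data):
    segment = "TCP|" + app_data   # layer 4: transport header
    packet = "IP|" + segment      # layer 3: network header
    frame = "ETH|" + packet       # layer 2: data-link header
    return frame                  # layer 1 puts the bits on the wire

def decapsulate(frame):
    # Travelling back up the stack, each layer removes its own header.
    for header in ("ETH|", "IP|", "TCP|"):
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

frame = encapsulate("GET /index.html")
# frame is now "ETH|IP|TCP|GET /index.html"
assert decapsulate(frame) == "GET /index.html"
```

The assertion in `decapsulate` mirrors what real stacks do: each layer only understands its own header and hands the rest up untouched.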

By understanding this model and how the hardware and software fit together to make a network (like the Internet or your local LAN) work, you can much more efficiently troubleshoot any network. For more information on using the OSI model to troubleshoot a network, see my articles Choose a network troubleshooting methodology and How to use the OSI Model to Troubleshoot Networks.

Summary

I can’t stress enough that if you are interviewing for any job in IT, you should be prepared to answer networking questions. Even if you are not interviewing to be a network admin, you never know when they will send a senior network admin to ask you a few quiz questions to test your knowledge. I can tell you firsthand that the questions above are the go-to topics most network admins will ask about during a job interview. And if you are already a Windows network admin, hopefully this article serves as a solid overview of the core networking concepts you should know. While you may not use them every day, knowledge of these concepts is going to help you troubleshoot networking problems faster.

Posted in TUTORIALS | Leave a Comment »

Implementing VDI using Windows Server 2008 R2 Remote Desktop Services and Hyper-V

Posted by Alin D on September 10, 2010

RDS VDI Functionality

An RDS-based VDI provides an environment that enables central storage, execution, and management of Windows desktops in a datacenter. An RDS VDI solution supports the creation and management of virtual desktop pools, and personal virtual desktops.

A virtual desktop pool is a collection of identical virtual desktops that are available for connection by multiple users. This type of virtual desktop does not maintain user state; instead it reverts back to its original state when a user logs off. A virtual desktop pool is applicable to users with the following characteristics:

  • Do not need a personalized desktop
  • Do not need offline access to virtual desktop
  • Do not need to save state between sessions
  • Need access to the virtual desktop from a collection of client devices

Examples of users that may require access to a virtual desktop pool are call-center workers, POS workers, and administrative office workers.

In contrast, a personal virtual desktop is configured for connection by a single user and maintains user state information when the user logs off. A personal virtual desktop is applicable to users with the following characteristics:

  • Need a personalized desktop
  • Need to save desktop state between sessions
  • Need administrative access to virtual desktop

Examples of users that may require access to a personal virtual desktop are offshore software developers and testers.

Core RDS VDI Components

Whether deploying a virtual desktop pool or personal virtual desktops using RDS-based VDI, specific core components are needed and additional components may be necessary depending on the complexity of the solution requirements. The core components required for any RDS-based VDI solution include the following:

  • RD Connection Broker
  • RD Session Host
  • RD Virtualization Host
  • RD Licensing Server

The RD Connection Broker builds on the functionality of the Windows Server 2008 TS Session Broker to manage not only session-based remote desktops, but also virtual desktops. The RD Virtualization Host (RDVH) is a new role added in Windows Server 2008 R2. The RD Virtualization Host role runs on Hyper-V hosts and serves to manage the state of virtual machines and connect users to virtual machines.

RD Connection Broker

The RD Connection Broker manages user requests for connection to session-based and virtual machine-based desktops. It tracks the user name, connection identifier, connection state, and host for each connection to an RD Session Host or RD Virtualization Host server. The RD Connection Broker also load balances connections within RD Session Host and RD Virtualization Host server farms by distributing connections between servers according to a relative server weight value. In addition, the RD Connection Broker enables disconnected users to reconnect to an existing session-based or virtual machine-based desktop.
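As a hypothetical illustration of weight-based distribution (the actual RD Connection Broker algorithm is internal to Windows; server names and numbers here are made up), each new connection can be assigned to the server with the lowest sessions-to-weight ratio:

```python
# Hypothetical sketch of relative-weight load balancing: each new
# connection goes to the server whose active-session count is lowest
# relative to its configured weight, so a server with double the
# weight absorbs roughly double the connections.

def pick_server(farm):
    return min(farm, key=lambda name: farm[name]["sessions"] / farm[name]["weight"])

farm = {
    "RDSH-1": {"weight": 100, "sessions": 0},
    "RDSH-2": {"weight": 200, "sessions": 0},  # heavier box, double weight
}

for _ in range(30):
    farm[pick_server(farm)]["sessions"] += 1

# After 30 connections, RDSH-2 (double the weight) holds double the
# sessions: 10 on RDSH-1 versus 20 on RDSH-2.
```

In practice you would set a higher relative weight on the servers with more CPU and memory so they carry a proportionally larger share of the farm’s load.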

RD Session Host

In a VDI environment, RD Session Host servers are configured to run in redirection mode, which disallows interactive user sessions. Instead, when a client requests a connection to a virtual desktop, the RD Session Host server role contacts the RD Connection Broker for the IP address of the virtual machine to which the user should connect and returns it to the client. The client then initiates an RDP connection to the virtual machine.

RD Virtualization Host

The RD Virtualization Host is installed on Hyper-V hosts to manage virtual machines in preparation for an RDP connection based on a request from the RD Connection Broker. In addition, the RD Virtualization Host monitors and reports on virtual machine guest sessions to the RD Connection Broker.

RD Licensing Server

If you have worked in a Microsoft Terminal Services environment, you are already familiar with Client Access Licenses (CALs). In Windows Server 2008 R2, an RDS CAL is required for each device or user that connects to RD Session Host or RD Virtualization Host servers. When a user or device attempts a connection through an RD Session Host server, it requires an RDS CAL. The RD Session Host server makes a request to an RD Licensing Server on behalf of the client. If an RDS CAL is available, it is issued to the client, which is then able to connect to the RD Session Host or RD Virtualization Host server.

Microsoft provides a grace period that begins when an RD Session Host server accepts the first client connection. However, after the grace period, each user or device must be issued an RDS CAL before it can connect to an RD Session Host or RD Virtualization Host server. In Windows Server 2008 R2, the grace period lasts 120 days.
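The licensing rule described above can be summarized in a short sketch (an illustration of the rule only, not the actual license server logic; the dates below are arbitrary):

```python
# Sketch of the RDS CAL rule: within the 120-day grace period a client
# may connect without a CAL; after the grace period expires, a CAL
# must be available in the license pool.

from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=120)

def can_connect(first_connection, today, cals_available):
    within_grace = (today - first_connection) <= GRACE_PERIOD
    return within_grace or cals_available > 0

first = date(2010, 1, 1)
assert can_connect(first, date(2010, 3, 1), cals_available=0)      # still in grace
assert not can_connect(first, date(2010, 9, 1), cals_available=0)  # grace over, no CAL
assert can_connect(first, date(2010, 9, 1), cals_available=5)      # CAL can be issued
```

The key point the sketch captures is that the grace clock starts at the *first* client connection to the RD Session Host server, not at installation time.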

If you already have Windows Server 2008 deployed and have Windows Server 2008 TS CALs, you do not need to purchase new CALs as both Windows Server 2008 TS CALs and Windows Server 2008 R2 RDS CALs allow a user or device the right to connect to Windows Server 2008 R2.

Other RDS VDI Components

There are several additional RDS components that may be needed to provide a complete VDI solution. These additional components include the following:

  • RD Web Access
  • RD Gateway
  • RemoteApp

RD Web Access

RD Web Access provides a web portal from which users can access session-based remote desktops, session-based remote applications, or virtual machine-based desktops. User requests for connection to session-based and virtual machine-based desktops made through RD Web Access are managed through the RD Connection Broker.

In Windows Server 2008 R2, RD Web Access allows the view of available remote desktops, remote applications, and virtual desktops to be customized based on user access. This means that a user will only be able to view and access those elements to which an administrator has specifically provided rights and permissions.

The RD Web Access portal also supports both private and public computer modes to control storage of sensitive information. For example, in private mode, RD Web Access cookies storing a user name expire in 4 hours. In public mode, they expire in 20 minutes.

RD Gateway

The RD Gateway is deployed at the edge of a corporate network and allows remote users to connect to resources deployed within a corporate intranet through a secure, encrypted connection. RD Gateway uses RDP over HTTPS to establish the secure connection between the remote user device and internal corporate resources.

RemoteApp

RemoteApp enables applications hosted on an RD Session Host server and virtual desktops hosted on an RD Virtualization Host server to be remotely accessed and integrated with a client desktop. For example, using RemoteApp, it is possible to launch a remotely hosted application from an icon on the client desktop. When the application launches, an RDP session initiates to the application host and the application is presented on the local desktop in its own resizable window. If a user runs multiple RemoteApp applications, the applications can also share a single RDP session.

Remote Client Connection Sequence in Windows Server 2008 R2 RDS VDI

So how do the RDS components work together to provide a remote client with access to a virtual desktop within an Active Directory domain? Here is a brief explanation of the process, assuming that RD Gateway and RD Web Access are deployed behind an external firewall and used to connect to a virtual desktop:

  1. A remote user opens a web browser and connects to the RD Web Access portal.
  2. RD Web Access requests the list of virtual desktop information to display from the RD Connection Broker.
  3. The RD Connection Broker checks virtual desktop permissions in Active Directory, and then provides the information back to the RD Web Access site.
  4. The user’s browser displays the virtual desktop information from the RD Web Access site.
  5. After the user selects a virtual desktop, the Remote Desktop Client (RDC) opens a connection to the RD Gateway.
  6. The RD Gateway forwards the connection to the RD Session Host server running in redirection mode.
  7. The RD Session Host server requests that the RD Connection Broker prepare the virtual desktop and return its IP address.
  8. The RD Connection Broker queries AD to verify user credentials and personal virtual desktop information associated with the user, if necessary.
  9. The RD Connection Broker requests that the RD Virtualization Host server start the virtual desktop if it is not running, and then returns the virtual desktop IP address to the RD Session Host server.
  10. The RD Session Host server returns the virtual desktop IP address to the Remote Desktop Client running on the user’s computer.
  11. The Remote Desktop Client connects to the virtual desktop through the RD Gateway.

Since there are many different network configurations in which the RD Gateway and other RDS VDI components can be deployed, the connection sequence may slightly vary. For additional information on RD Gateway deployment configurations, you can check out the Microsoft RDS Team blog entry here.
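The broker-mediated part of the sequence above (steps 7 through 9) can be sketched as follows. All class and method names here are illustrative stand-ins, not actual Windows APIs:

```python
# Illustrative sketch of steps 7-9: the RD Session Host asks the
# RD Connection Broker to prepare a desktop; the broker resolves the
# user's VM (personal assignment or pooled), asks the
# RD Virtualization Host to start it if needed, and returns its IP.

class VirtualizationHost:
    def __init__(self):
        self.running = {}  # VM name -> IP address

    def ensure_started(self, vm_name):
        # Step 9: start the virtual desktop if it is not running.
        if vm_name not in self.running:
            self.running[vm_name] = "10.0.0.%d" % (10 + len(self.running))
        return self.running[vm_name]

class ConnectionBroker:
    def __init__(self, rdvh, personal_desktops):
        self.rdvh = rdvh
        self.personal_desktops = personal_desktops  # user -> personal VM

    def prepare_desktop(self, user):
        # Steps 7-8: personal desktop if assigned, otherwise a pooled VM.
        vm = self.personal_desktops.get(user, "POOL-VM-01")
        return self.rdvh.ensure_started(vm)

broker = ConnectionBroker(VirtualizationHost(), {"alice": "VM-ALICE"})
ip = broker.prepare_desktop("alice")  # returns "10.0.0.10"
# The RD Session Host would hand this IP back to the client (step 10),
# which then opens its RDP connection through the RD Gateway (step 11).
```

The sketch shows why the RD Session Host runs in redirection mode in a VDI: it never hosts the session itself, it only relays the broker’s answer back to the client.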

Conclusion

In Windows Server 2008 R2, the RD Connection Broker has been extended to manage not only session-based remote desktops, but also virtual machine-based desktops running on Hyper-V. With this new feature and the addition of the RD Virtualization Host component, it is now possible to build a VDI for small or medium environments using only Windows Server 2008 R2 RDS components. For large deployments, a Microsoft RDS-based VDI solution can still be integrated with Citrix XenDesktop to provide an enterprise scale infrastructure.

Posted in Windows 2008 | Leave a Comment »
