Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server, and Azure

Posts Tagged ‘network address translation’

SQL Azure Services – A full overview

Posted by Alin D on May 11, 2011

SQL Azure is a relational database service hosted on Microsoft’s Windows Azure platform, well suited to web-facing database applications as well as to any workload that needs a relational database in the cloud.

The present version mostly deals with the component analogous to a database engine in a local, on-site SQL Server. Future enhancements will host the other services such as Integration Services, Reporting Services, Service Broker, and any other yet-to-be defined services. Although these services are not hosted in the cloud, they can leverage data on SQL Azure to provide support. SQL Server Integration Services can be used to a great advantage with SQL Azure for data movement, and highly interactive boardroom quality reports can be generated using SQL Azure as a backend server.

Infrastructure features

SQL Azure is designed to handle peak workloads through failover clustering, load balancing, replication, and scaling out, all automatically managed at the data center. SQL Azure’s infrastructure architecture is fashioned to implement all of these features.

High availability is made possible by maintaining multiple redundant copies on multiple physical servers, thus ensuring that business processes can continue without interruption. At least three replicas are created; a replica can replace an active copy that encounters any kind of fault condition, so that service is assured. At present, the replicated copies are all in the same data center, but in the future geo-replication of data may become available, improving performance for global enterprises. Hardware failures are addressed by automatic failover.

Enterprise data centers have traditionally addressed scaled-out data storage needs, but at the cost of administrative overhead in maintaining on-site SQL Servers. SQL Azure offers the same or even better functionality without incurring those administrative costs.

How different is SQL Azure from SQL Server?

SQL Azure (version 10.25) may be viewed as a subset of an on-site SQL Server 2008 (version 10.5), both exposing the Tabular Data Stream (TDS) protocol for data access using T-SQL. As a subset, SQL Azure supports only some of the features of SQL Server and the T-SQL feature set. However, more T-SQL features are being added in the continuous upgrades from SU1 to SU5. Since it is hosted on computers in the Microsoft Data Centers, its administration is, in some aspects, different from that of an on-site SQL Server.

SQL Azure is administered as a service, unlike on-site servers. The SQL Azure server is not a SQL Server instance and is therefore administered as a logical server rather than as a physical one. Database objects such as tables, views, and users are administered by the SQL Azure database administrator, while the physical side is administered by Microsoft in its data centers. This abstraction of infrastructure away from the user confers most of SQL Azure’s availability, elasticity, price, and extensibility features. To get started with SQL Azure, you must provision a SQL Azure Server on the Windows Azure platform, as explained in the After accessing the portal section later in this article.

SQL Azure provisioning

Provisioning a SQL Azure Server at the portal is done by a mere click of the mouse and will be ready in a few minutes. You may provision the storage that you need, and when the need changes, you can add or remove storage. This is an extremely attractive feature especially for those whose needs start with low storage requirements and grow with time. It is also attractive to those who may experience increased load at certain times only.

SQL Azure databases lie within the operational boundary of the customer-defined SQL Azure Server; it is a container of logical groupings of databases enclosed in a security firewall fence. While the databases are accessible to the user, the files that store the relational data are not; they are managed by the SQL Azure services.

A single SQL Azure Server, which you get when you subscribe, can house a large number (150) of databases, presently limited to the 1 GB and 10 GB types within the scope of the licensing arrangement.

• What if you provision for 1 GB and you exceed this limit?

Then you either provision a 10 GB database or get one more 1 GB database. This means there is a bit of due diligence you need to do before you start your project.

• What if the data exceeds 10 GB?

The recommendation is to partition the data into smaller databases. You may have to redesign your queries to address the changed schema, as cross-database queries are not supported. The rationale for using smaller databases and partitioning lies in the agility to quickly recover from failures (high availability/fault tolerance) with the ability to replicate faster, while addressing the needs of the majority of users (small business and web facing). However, responding to the requests of users, Microsoft may provide 50 GB databases in the future (the June 2010 update to SQL Azure Services will allow 50 GB databases).

• How many SQL Azure Servers can you have?

You can have any number of SQL Azure Servers (as many as you can afford) and place them in any geolocation you choose; it is strictly one server per subscription. Presently there are six geolocated data centers that can be chosen, and the number is likely to grow. Best practice dictates that you keep your data nearest to where you use it most, so that performance is optimized.

The SQL Azure databases, being relational in nature, can be programmed using the same T-SQL skills used with on-site SQL Servers. It must be remembered, though, that SQL Azure Servers are not physical servers but virtual objects. Hiding their physical whereabouts while providing adequate hooks to them helps you focus more on design and less on files, folders, and hardware problems. While server-related information is shielded from the user, the databases themselves are containers of objects similar to those found in on-site SQL Servers, such as tables, views, stored procedures, and so on. These database objects are accessible to logged-on users who have permission.

After accessing the portal

To get started with SQL Azure Services, you will need to get a Windows Azure platform account, which gives access to the three services presently offered. The first step is to get a Windows Live ID and then establish an account at Microsoft’s Customer Portal. In this article, you will be provisioning a SQL Azure Server after accessing the SQL Azure Portal.

Server-level administration

Once you are in the portal, you will be able to create your server, for which you can provide a username and password. You will also be able to drop the server and change the password. You can also designate in which of the data centers you want your server to be located. With the credentials created in the portal, you become the server-level principal, the equivalent of sa on your server. In the portal, you can also create databases and firewall rules that will only allow users from the location(s) you specify here. The user databases that you create here are in addition to the master database created by SQL Azure Services, a repository of information about the other databases. The master database also keeps track of logins and their permissions. You can get this information by querying the sys.sql_logins and sys.databases views in master.
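For example, while connected to the master database, the following queries (a minimal sketch) list the logins and databases on your server:

```sql
-- Run while connected to the master database of your SQL Azure server
SELECT name FROM sys.sql_logins;   -- logins known to the logical server
SELECT name FROM sys.databases;    -- databases hosted on the logical server
```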

If you are planning to create applications, you may also copy the connection strings that you will need for your applications, which are available in the portal. You would typically use the Visual Studio IDE to create applications. However, SQL Azure can be used standalone without the Windows Azure service; indeed, some users may simply move their data to SQL Azure for archiving.

Once you have provisioned a server, you are ready to create other objects that are needed besides creating the databases. At the portal, you can create a database and set up a firewall fence, but you will need another tool to create other objects in the database.

Setting up firewall rules

Users accessing a SQL Azure Server in the cloud need to go through two kinds of barriers: first your computer’s firewall, and then the firewall that protects your SQL Azure Server. The firewall rules that you set up in the portal admit only users from the locations you specify for each rule, because the firewall rules only look at the originating IP address.

By default, there are no firewall rules to start with and no one gets admitted. Firewall rules are first configured in the portal. If your computer is behind a Network Address Translation (NAT) then your IP address will be different from what you see in your configuration settings. However, the user interface in the portal for creating a firewall discovers and displays the correct IP address most of the time.

A workaround is suggested here for those cases in which your firewall UI incorrectly displays your IP Address: http://hodentek.blogspot.com/2010/01/firewall-ip-address-setting-in-sql.html.

Firewalls can also be managed from a tool such as SSMS using extended stored procedures in SQL Azure. They can be managed programmatically as well from Visual Studio.
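As a sketch of the stored-procedure route, the following statements, run while connected to master, add, review, and remove a firewall rule (the rule name and IP range below are illustrative):

```sql
-- Add or update a rule admitting a range of originating IP addresses
EXEC sp_set_firewall_rule N'OfficeRange', '131.107.0.1', '131.107.0.255';

-- Review the current rules
SELECT * FROM sys.firewall_rules;

-- Remove the rule when it is no longer needed
EXEC sp_delete_firewall_rule N'OfficeRange';
```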

In order for you to connect to SQL Azure, you also need to open your computer’s firewall, so that an outgoing TCP connection is allowed through port 1433 by creating an exception. You can configure this in your computer’s Control Panel. If you have set up some security program, such as Norton Security, you need to open this port for outgoing TCP connections in the Norton Security Suite’s UI.

In addition, your on-site programs accessing SQL Azure Server and your hosted applications on Windows Azure may also need access to SQL Azure. For this scenario, you should check the checkbox Allow Microsoft Services access to this server in the firewall settings page.

The firewall rule only checks for an originating IP address but you need to be authenticated to access SQL Azure. Your administrator, in this case the server-level principal, will have to set you up as a user and provide you with appropriate credentials.

Administering at the database level

SQL Azure database administration is best done from SSMS. You connect to the Database Engine in SSMS, which displays a user interface where you enter the credentials that you established in the portal. You also have other options to connect to SQL Azure (Chapter 3, Working with SQL Azure Databases from Visual Studio 2010 and Chapter 4, SQL Azure Tools). In SSMS, you have the option to connect to either of the databases, the system-created master or the database(s) that you create in the portal. The Object Explorer displays the server with all objects that are contained in the chosen database. What is displayed in the Object Explorer is contextual and the use of the USE statement to change the database context does not work. Make sure you understand this, whether you are working with Object Explorer or query windows. The server-level administrator is the ‘top’ administrator and he or she can create other users and assign them to different roles just like in the on-site SQL Server. The one thing that an administrator cannot do is undertake any activity that would require access to the hardware or the file system.

Role of SQL Azure database administrator

The SQL Azure database administrator administers and manages schema generation, statistics management, index tuning, query optimization, as well as security (users, logins, roles, and so on). Since the physical file system cannot be accessed by the user, tasks such as backing up and restoring databases are not possible. Looking at questions and concerns raised by users in forums, this appears to be one of the less appealing features of SQL Azure that has often resulted in remarks that ‘it is not enterprise ready’. Users want to keep a copy of the data, and if it is a very large database, the advantages of not having servers on the site disappear as you do need a server on-site to back up the data. One suggested recommendation by Microsoft is to use SQL Server Integration Services and bulk copying of data using the SQLCMD utility.
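As a sketch of the bulk-copy route, a table can be copied out of SQL Azure to a local file with the bcp utility (the server name, database, table, and credentials below are placeholders):

```
rem Export a table from SQL Azure to a local native-format file
bcp mydb.dbo.Orders out C:\backup\Orders.dat -n -S tcp:myserver.database.windows.net -U mylogin@myserver -P mypassword
```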

SQL Azure databases

These databases are no different from those of on-site SQL Server 2008 except that the user database node may not have all the nodes of a typical user database that you find in the on-site server. The nodes Database Diagrams, Service Broker, and Storage will be absent as these are not supported. In the case of the system database node, only the master will be present. The master in SQL Azure is a database that contains all information about the other databases.

You can only access SQL Azure with SQL Server Authentication, whereas an on-site SQL Server offers the additional option of Windows Authentication. All the allowed DDL and DML operations can be programmed using templates available in SSMS. Some of the more common ones, as well as access to the template explorer, which provides a more complete list, are detailed later in the chapter.

User administration and logins

Security is a very important aspect of database administration, and it is all the more important in the multi-tenant model used to host SQL Azure, where access must be tightly controlled.

The server-level administrator created in the portal is the top-level administrator of the SQL Azure Server. While this administrator can create other databases in the portal, other database objects, including users and their logins, must be created using SSMS.

Server-level administration

The master database is used to perform server-level administration; it keeps records of all logins and of which logins have permission to create a database. You must first establish a connection to the master database when creating a New Query to carry out CREATE, ALTER, or DROP operations on LOGINs or DATABASEs. The server-related views sys.sql_logins and sys.databases can be used to review logins and databases. Whenever you want to change the database context, you have to log in to the database using the Options tab of SSMS’s Connect to Server dialog.

Creating a database using T-SQL is extremely simple, as there are no file references to be specified, and certain other options are not implemented. The following syntax is for creating a database on an on-site SQL Server instance:

CREATE DATABASE database_name
    [ ON
        [ PRIMARY ] [ <filespec> [ ,...n ]
        [ , <filegroup> [ ,...n ] ] ]
        [ LOG ON { <filespec> [ ,...n ] } ]
    ]
    [ COLLATE collation_name ]
    [ WITH <external_access_option> ]
[;]

To attach a database

CREATE DATABASE database_name
    ON <filespec> [ ,...n ]
    FOR { ATTACH [ WITH <service_broker_option> ]
        | ATTACH_REBUILD_LOG }
[;]

<filespec> ::=
{
    (
        NAME = logical_file_name ,
        FILENAME = { 'os_file_name' | 'filestream_path' }
        [ , SIZE = size [ KB | MB | GB | TB ] ]
        [ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
        [ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
    ) [ ,...n ]
}

<filegroup> ::=
{
    FILEGROUP filegroup_name [ CONTAINS FILESTREAM ] [ DEFAULT ]
        <filespec> [ ,...n ]
}

<external_access_option> ::=
{
    [ DB_CHAINING { ON | OFF } ]
    [ , TRUSTWORTHY { ON | OFF } ]
}

<service_broker_option> ::=
{
    ENABLE_BROKER
    | NEW_BROKER
    | ERROR_BROKER_CONVERSATIONS
}

Create a database snapshot

CREATE DATABASE database_snapshot_name
    ON
    (
        NAME = logical_file_name,
        FILENAME = 'os_file_name'
    ) [ ,...n ]
    AS SNAPSHOT OF source_database_name
[;]

Look how simple the following syntax is for creating a database in SQL Azure:

CREATE DATABASE database_name
    [ ( MAXSIZE = { 1 | 10 } GB ) ]
[;]

However, certain default values are set for the databases, which can be reviewed by issuing the following query after creating a database:

SELECT * from sys.databases

Managing logins

After logging in as a server-level administrator to master, you can manage logins using CREATE LOGIN, ALTER LOGIN, and DROP LOGIN statements. You can create a password by executing the following statement for example, while connected to master:

CREATE LOGIN xfiles WITH PASSWORD = '@#$jAyRa1'

You need to create such a login before you proceed further. During authentication you will normally use the login name and password but, because some tools implement TDS differently, you may have to append the server-name part of the fully qualified server name <servername>.database.windows.net to the username, as in login_name@<servername>. Note that both <login_name> and <login_name>@<servername> are valid in the Connect to Server UI of SSMS.

Connecting to SQL Azure using new login

After creating a new login as described here, you must confer database-level permissions to the new login to get connected to SQL Azure. You can do so by creating users for the database with the login.

Logins with server-level permissions

The roles loginmanager and dbmanager are two security-related roles in SQL Azure to which users may be assigned, allowing them to create logins or databases respectively. Only the server-level principal (created in the portal) or users in the loginmanager role can create logins. The dbmanager role is similar to the dbcreator role in SQL Server; users in this role can create databases using the CREATE DATABASE statement while connected to the master database.

These role assignments are made using the stored procedure sp_addrolemember as shown here for users, user1 and user2. These users are created while connected to master using, for example:

CREATE USER User1 FROM LOGIN login1;
CREATE USER User2 FROM LOGIN login2;
EXEC sp_addrolemember 'dbmanager', 'User1';
EXEC sp_addrolemember 'loginmanager', 'User2';

Migrating databases to SQL Azure

As most web applications are data-centric, SQL Azure’s databases need to be populated with data before the applications can access it. More often than not, pushing all of your data to SQL Azure requires tools. You have several options, such as scripts, the migration wizard, bulk copy (bcp.exe), SQL Server Integration Services, and so on. More recently (April 19, 2010 update), Data-tier applications were implemented for SQL Azure, providing yet another option for migrating databases using both SSMS and Visual Studio.

Troubleshooting

There may be any number of reasons why interacting with SQL Azure is not always successful. For example, the 99.99 percent availability assured by the service level agreement may not always be met, a command may exceed its execution time-out, and so on. In these cases, troubleshooting what might have happened becomes important. Here we will look at some of the issues that prevent interaction with SQL Azure and ways of troubleshooting their causes.

• Login failure is one of the most common problems that one faces in connecting to SQL Azure. In order to log in successfully:

◦ You need to make sure that you are using the correct version of SSMS.

◦ Make sure you are using SQL Server Authentication in the Connect to Server dialog box.

◦ Make sure your login name and password (typed exactly as given to you by your administrator) are correct. The password is case sensitive. Sometimes you may need to append the server name to the login name.

◦ If you cannot browse the databases, you can type in the name and try.

If your login is not successful, either there is a problem in the login or the database is not available.

If you are a server-level administrator, you can reset the password in the portal. For other users, the administrator or a loginmanager can correct the logins.

• Service unavailable or does not exist.

If you have already provisioned a server, check the Service Dashboard at http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx to make sure SQL Azure Services are running without problems at the data center.

Use the same techniques that you would use in the case of SQL Server 2008 with network commands like Ping, Tracert, and so on. Use the fully qualified name of the SQL Azure Server you have provisioned while using these utilities.

• You assume you are connected, but maybe you are disconnected.

You may be in a disconnected state for a number of reasons, such as:

◦ When a connection is idle for an extended period of time

◦ When a connection consumes an excessive amount of resources or holds onto a transaction for an extended period of time

◦ If the server is too busy

Try reconnecting. Note that SQL Azure error messages are a subset of SQL Server error messages.

T-SQL support in SQL Azure

Transact-SQL is used to administer SQL Azure. You can create and manage objects as you will see later in this chapter. CRUD (create, read, update, delete) operations on the table are supported. Applications can insert, retrieve, modify, and delete data by interacting with SQL Azure using T-SQL statements.
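As an illustrative sketch (the table name and values below are invented), a typical CRUD round trip looks no different from on-site T-SQL; note that SQL Azure requires every table to have a clustered index, which the primary key below supplies by default:

```sql
-- Hypothetical table; the PRIMARY KEY provides the clustered index
-- that SQL Azure requires on every table
CREATE TABLE Customers (Id int PRIMARY KEY, Name nvarchar(50));

INSERT INTO Customers (Id, Name) VALUES (1, N'Contoso');   -- create
SELECT Id, Name FROM Customers;                            -- read
UPDATE Customers SET Name = N'Fabrikam' WHERE Id = 1;      -- update
DELETE FROM Customers WHERE Id = 1;                        -- delete
```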

As a subset of SQL Server 2008, SQL Azure supports only a subset of T-SQL that you find in SQL Server 2008.

The supported and partially supported features from Microsoft documentation are reproduced here for easy reference.

The support for Transact-SQL reference in SQL Azure can be described in three main categories:

• Transact-SQL language elements that are supported as is

• Transact-SQL language elements that are not supported

• Transact-SQL language elements that provide a subset of the arguments and options in their corresponding Transact-SQL elements in SQL Server 2008

The following Transact-SQL features are supported or partially supported by SQL Azure:

• Constants

• Constraints

• Cursors

• Index management and rebuilding indexes

• Local temporary tables

• Reserved keywords

• Stored procedures

• Statistics management

• Transactions

• Triggers

• Tables, joins, and table variables

• Transact-SQL language elements

• Create/drop databases

• Create/alter/drop tables

• Create/alter/drop users and logins

• User-defined functions

• Views

The following Transact-SQL features are not supported by SQL Azure:

• Common Language Runtime (CLR)

• Database file placement

• Database mirroring

• Distributed queries

• Distributed transactions

• Filegroup management

• Global temporary tables

• Spatial data and indexes

• SQL Server configuration options

• SQL Server Service Broker

• System tables

• Trace flags

T-SQL grammar details are found here: http://msdn.microsoft.com/en-us/library/ee336281.aspx.

 

Posted in Azure | 1 Comment »

How to Import PST into Exchange 2010 with Powershell

Posted by Alin D on February 17, 2011

The process of importing multiple users’ PST files into Exchange 2010 is not as simple as you might expect, and certainly not as simple as it probably should be, given how common this particular task is. To spread some knowledge about wrestling with this task, this article is aimed at SysAdmins wanting to migrate their users’ personal PST files into those users’ main Exchange mailboxes. To make this as easy as possible, I’ll walk through the entire process involved, as well as creating the appropriate PowerShell scripts to semi-automate the process. Finally, to keep everything clear, I’ve split the material into three parts:

  • Importing a single PST File into Exchange 2010 RTM and SP1
  • Finding PST files on your network, and then…
  • …Importing these into Exchange (i.e. not one-by-one!)

While my solution is not necessarily best practice, it’s one of the best solutions I could research, and it’s likely that many SysAdmins will come up with something similar. Bear in mind that these three steps are not the “easiest way” of handling the importing process in relative terms, as they require a non-trivial amount of tweaking and come with their fair share of pitfalls and gotchas.

Introduction:

Just so we’re all on the same page: this guide focuses on using Windows Management Instrumentation (WMI) to identify files on remote users’ machines, and then import those files into their mailboxes. There are of course several options:

  1. You could ask users to manually drag mails across from within Outlook.
  2. You could have group policy enforce a logon script that copies user PST files to a shared network drive and then removes said files from their system (or prevents Outlook from mounting them).
    A script running on a server can then poll for new PSTs in this folder and automatically add them with the –IsArchive parameter so that the contents of the users’ local archive PSTs are available in their archive mailboxes (almost) immediately. The advantages of this approach are that you don’t need to worry about locked files (as the files can be copied before Outlook has had a chance to start/lock them), or about enabling WMI firewall access on client machines. However, it does still require that the user log off and on…
  3. The third (and, I think, easiest) approach is to use WMI to remotely search for files. This can generate a list of PSTs on all machines, and highlight the machines which couldn’t be searched (and which would require further attention). However, it’s highly likely that Outlook will be running on your users’ machines, making this process trickier. Naturally, WMI can be used to terminate the Outlook process remotely, but this is not ideal, and there are other ways around this problem. The advantage of this approach is that it does not require individual users to log in and out (useful if a user is on holiday, for instance) – merely that the machine is on (which could be managed via Wake On LAN).

As stated, this guide focuses on the WMI-based solution and just covers the basics – more advanced scripts could be created to deal with a greater variety of error cases and configurations (e.g. shutting down outlook, making separate lists of machines to try again, detailing which PSTs had passwords and could not be imported).

How to import a PST file into Exchange.

Importing a PST file into Exchange 2010 requires the use of the Exchange Management Shell (EMS), although (somewhat confusingly) this functionality was originally included in the Exchange Management Console in the Beta release of Exchange 2010.

A PowerShell cmdlet in the EMS is used to perform the action, and the use of this cmdlet requires that the current user has Import Export roles enabled on their profile. In order to run the import and export cmdlets, Exchange 2010 RTM also requires that Outlook 2010 x64 is installed on the machine being used to run said cmdlets, although this is no longer a requirement of Exchange 2010 SP1.

So, to import a single PST file into an exchange 2010 mailbox:

  1. Install Outlook 2010 x64 on the Exchange server. Bear in mind that, by default, on a machine with no pre-existing Office installation, the DVD’s autorun setup will try to install the x86 applications. Be sure to manually run the installer from the x64 directory to install the x64 version of Outlook. This step is not necessary for Exchange 2010 SP1, as SP1 (re)includes a MAPI provider.
  2. Enable Import Permissions for a security group which your user belongs to – In this case, ‘Mailbox Support’ – with the following command:

    New-ManagementRoleAssignment -Name "Import Export Mailbox Admins" `

    -SecurityGroup "Mailbox Support" `

    -Role "Mailbox Import Export"

  3. Import the desired PST file into the appropriate user’s mailbox with the following command:

    Import-Mailbox -PSTFolderPath pstfilepath -Identity exchangealias

Exchange 2010 SP1 differs a little, in that you don’t need to install Outlook 2010 x64, and rather than the synchronous Import-Mailbox cmdlet, the asynchronous New-MailBoxImportRequest can be used, which takes the form of:

New-MailboxImportRequest -FilePath pstfilepath -Mailbox mailbox

The status of current requests can be viewed with the Get-MailboxImportRequest cmdlet, and completed requests can be cleared (or pending/In-progress requests cancelled) with the Remove-MailboxImportRequest cmdlet. One of the advantages of this new cmdlet, other than it being asynchronous, is that you can specify an additional –IsArchive parameter, which will import the PST directly into the users archive mailbox.
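Putting these cmdlets together, a sketch of a typical SP1 session might look like this (the share path and mailbox alias are placeholders):

```powershell
# Import a PST directly into the user's archive mailbox
New-MailboxImportRequest -FilePath \\fileserver\psts\jsmith.pst -Mailbox jsmith -IsArchive

# Check progress of all outstanding requests
Get-MailboxImportRequest | Get-MailboxImportRequestStatistics

# Clear requests that have finished
Get-MailboxImportRequest -Status Completed | Remove-MailboxImportRequest
```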

I did experience a few problems using these cmdlets during the brief time I spent doing research for this guide. The Exchange 2010 RTM Import-Mailbox on one system simply refused to play nicely, and kept throwing the following error:

Error:
Error was found for <Mailbox Name> because: Error occurred in the step: Approving object. An unknown error
has occurred., error code: -2147221219
    + CategoryInfo          : InvalidOperation: (0:Int32) [Import-Mailbox], RecipientTaskException
    + FullyQualifiedErrorId : CFFD629B,Microsoft.Exchange.Management.RecipientTasks.ImportMailbox

Not a lot of help in itself, although a little Googling and experimentation revealed four main potential causes for this error:

  1. The appropriate role permissions have not been added to the user’s security profile.
  2. The version of MAPI being used has somehow got confused, which can be fixed by running the fixmapi command from the command prompt.
  3. The PST file is password protected.
  4. There is a bug in Exchange.

In my case, I’d unfortunately hit the fourth problem, and the workaround proved to be pretty horrific – it may simply be worth waiting for a fix from Microsoft. To complete my task, I had to temporarily add a new domain controller to my network to host a new Exchange 2010 server. I then moved the target mailboxes (i.e. the ones for which I had PSTs to import) across to this new server, performed the import, and then moved the mailboxes back to their original Exchange server and removed the temporary server from the network (Like I said, pretty horrific).

Upgrading the system to 2010 SP1 and using the New-MailBoxImportRequest cmdlet on the same system yielded the following error:

Couldn’t connect to the target mailbox.
    + CategoryInfo          : NotSpecified: (0:Int32) [New-MailboxImportRequest], RemoteTransientException
    + FullyQualifiedErrorId : 1B0DDEBA,Microsoft.Exchange.Management.RecipientTasks.NewMailboxImportRequest

Again, this appears to be a known issue, and apparently one which is scheduled to be fixed before the final release of SP1.

Finding PST files on the network

So, we’ve seen the process for importing a local PST file into Exchange server, however, in reality, it’s likely that these PST files are scattered liberally around your network on the hard-drives of your users’ machines as a result of Outlook’s new personal archiving functionality. Ideally, so that this mass-import process is transparent to your users, you’d like some way of finding all of these PST files, pairing them up with their users, and then simply importing them into the appropriate mailbox.

There are a few steps required to set up something like that. First, we can query Active Directory for a list of all the machines attached to your domain. We can then use WMI to search each of these machines for PST files, and the file paths for these PSTs should hopefully give us a clue as to which user they belong to (by default, they will be created in a directory path containing the username.) We can also grab the file owner file attribute, which should correlate with the details in the file path.

Naturally, this technique requires that all of the machines in your network are switched on and accessible by WMI, although a list of the machines which could not be queried can be provided as an output.
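As a sketch of the WMI query itself (the computer name below is a placeholder, and the machine must be reachable through its firewall), the CIM_DataFile class can be searched for files with a .pst extension:

```powershell
# List PST files on one remote machine. CIM_DataFile scans the file
# system, which is slow, so restricting the query to one drive helps.
$computer = "WORKSTATION01"
Get-WmiObject -ComputerName $computer -Class CIM_DataFile `
    -Filter "Drive = 'C:' AND Extension = 'pst'" |
    Select-Object -ExpandProperty Name
```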

Notes about WMI

By default, WMI is blocked by the Windows Firewall in Windows 7 and Windows Server 2008 R2, so you’ll probably need to open up the ports on all of your users’ machines. This can be done with the netsh command, or through a change to Group Policy.

You might quite rightly be asking yourself “What are the implications of this?” WMI is a powerful beast which allows remote access to many aspects of a user’s machine. As such, it could be considered a significant security vulnerability. In addition, it’s typically accessed through port 135, which not only permits access to WMI, but also to any other DCOM components which may be installed on a machine, thus opening the way for exploitation by Trojans and the like. Needless to say, the ports are blocked by default for a reason, so carefully consider all of the implications when opening them.

WMI will also not help you if the machines you wish to tinker with are behind NAT (Network Address Translation) – you’ll simply be unable to reach these machines.

Nevertheless, let’s assume a situation without any NAT, and where the security risks have been minimised. The following script generates a txt file (the filename is defined on line 2) of all the computers on your domain to be searched. This file can then be manually edited with Notepad to remove any machines you don’t wish to search:

$strCategory = "computer"

$strOutput = "c:\computernames.txt"

$objDomain = New-Object System.DirectoryServices.DirectoryEntry

$objSearcher = New-Object System.DirectoryServices.DirectorySearcher

$objSearcher.SearchRoot = $objDomain

$objSearcher.Filter = ("(objectCategory=$strCategory)")

$colProplist = "name"

foreach ($i in $colPropList){$objSearcher.PropertiesToLoad.Add($i)}

$colResults = $objSearcher.FindAll()

[bool]$firstOutput = $true

foreach ($objResult in $colResults)

{

$objComputer = $objResult.Properties;

if($firstOutput)

{

Write-output $objComputer.name | Out-File -filepath $strOutput

$firstOutput = $false;

}

else

{

Write-output $objComputer.name | Out-File -filepath $strOutput `

-append

}

}

Listing 1 – A PowerShell script to generate a list of all machines on your domain which are to be searched for PST files.

The next script will generate a CSV (Comma separated values) file detailing the network paths of the PST files you need to import:

$strComputers = Get-Content -Path "c:\computernames.txt"

[bool]$firstOutput = $true

foreach($strComputer in $strComputers)

{

$colFiles = Get-Wmiobject -namespace "root\CIMV2" `

-computername $strComputer `

-Query "Select * from CIM_DataFile `

Where Extension = 'pst'"

foreach ($objFile in $colFiles)

{

if($objFile.FileName -ne $null)

{

$filepath = $objFile.Drive + $objFile.Path + $objFile.FileName + "." `

+ $objFile.Extension;

$query = "ASSOCIATORS OF {Win32_LogicalFileSecuritySetting=’" `

+ $filepath `

+ "’} WHERE AssocClass=Win32_LogicalFileOwner ResultRole=Owner"

$colOwners = Get-Wmiobject -namespace "root\CIMV2" `

-computername $strComputer `

-Query $query

$objOwner = $colOwners[0]

$user = $objOwner.ReferencedDomainName + "\" + $objOwner.AccountName

$output = $strComputer + "," + $filepath + "," + $user

if($firstOutput)

{

Write-output $output | Out-File -filepath c:\pstdetails.csv

$firstOutput = $false

}

else

{

Write-output $output | Out-File -filepath c:\pstdetails.csv -append

}

}

}

}

Listing 2 – A PowerShell script to find and list network paths for PST files to be imported.

This script will take as input a text file containing a list of machine names, which is, conveniently, the output of the first script. It will then generate a .csv file of all the PST files found on those machines, and the owners associated with them. So far, so painless.

Importing the remote PSTs into Exchange

Now that we’ve seen how to gain a list of machines and their respective PST files, we now need to import these files into Exchange. The following script does just that:

# Read in pst file locations and users

$strPSTFiles = Get-Content -Path "c:\pstdetails.csv"

foreach($strPSTFile in $strPSTFiles)

{

      $strMachine = $strPSTFile.Split(',')[0]

      $strPath = $strPSTFile.Split(',')[1]

      $strOwner = $strPSTFile.Split(',')[2]

      # Get network path for pst file

      $source = "\" + $strMachine + "" + $strPath.Replace(‘:’,’$’)

# Import the PST into the mailbox.

# Exchange 2010 RTM cmdlet:
Import-Mailbox -PSTFolderPath $source -Identity $strOwner

# Exchange 2010 SP1 cmdlet:
New-MailboxImportRequest -FilePath $source -Mailbox $strOwner

}

Listing 3 – PowerShell to import a list of PST files into Exchange from their respective machines.

Import-Mailbox is the Exchange 2010 RTM cmdlet, and New-MailboxImportRequest is its Exchange 2010 SP1 equivalent – delete whichever doesn’t apply to your system.

The Exchange 2010 SP1 version of the script will execute in far less time than the original RTM version due to the asynchronous nature of the ImportRequest cmdlet. These requests are processed in the background and can be monitored with the Get-MailboxImportRequest cmdlet to observe their status. Once these have completed, as mentioned earlier, it’s necessary to actively clear the requests with the Remove-MailboxImportRequest cmdlet. As easy as this all sounds, there are quite a few potential pitfalls here:

  • The users’ machines must be on.
  • File sharing must be on, to allow the files to be transferred.
  • Outlook must not be running on the remote users’ machines – if Outlook is running and has the PST file attached, the file will be locked and unavailable for importing.
  • Passwords are not supported – the PowerShell cmdlet used by Exchange to import PST files simply doesn’t handle passwords.
  • There’s a limit on concurrent requests – with the SP1 asynchronous requests, no more than 10 concurrent requests can be handled per mailbox without manually specifying a unique name for each request (this makes the script a little more complicated, but is not a showstopper, particularly given that most users will only have a single errant PST file to be imported).

That being said, there are various things you could do to augment this script; some suggestions include:

  • Having WMI shut down Outlook on remote users’ machines before attempting import.
  • Generating a further output file detailing a list of all the PSTs which failed to import, with reasons why. It would be useful to know if these files were password protected, or the machine hosting them was shut down or had disconnected since they were identified.
  • In the SP1 case, you could automate polling the requests’ statuses, and the removal of those which have completed.

Summary

So, although it’s possible to search for and import your users’ PST files into Exchange from across the network, it’s not an easy or particularly well-documented process. Frustratingly, there are also elements of the process which are directly hampered by errors and glitches.

Although none of these problems are show-stoppers, they’ll raise your blood pressure if you don’t know about them! Hopefully this guide will set you on the right track and steer you around all but the most well-concealed pitfalls.

The Really Easy Way

As I mentioned at the start, although I’ve broken down the whole process into easy-to-follow steps and pointed out where you’ll need to pay extra attention, this is not, in fact, the easiest way of handling the PST import process. If you’d rather negate the whole problem in one fell swoop, there are 3rd-party tools which will handle the whole import process for you quickly and smoothly, and which will allow you to manage every aspect of the import at your convenience.

Resources:

Whilst I was investigating the background facts for this article, I found the following resources on the internet to be of interest:

I started off Googling for ‘Importing PST files into Exchange’; the following pages on Experts Exchange and HowExchangeWorks.com proved to be an interesting read.

However, as stated in part 1 of this guide, one of the systems I was testing these processes on kept throwing errors when I was trying to execute the import-mailbox cmdlet. This page proved very helpful in identifying the issue I’d hit and suggesting a workaround.

I was then faced with the problem of actually locating the PST files on the network; I found a handy page on the MSExchangeTips blog, detailing how to query WMI across the network.

Posted in Exchange, Powershell | Tagged: , , , , , , | 1 Comment »

10 Core Concepts that Every Windows Network Admin Must Know

Posted by Alin D on September 13, 2010

Introduction

I thought that this article might be helpful for Windows network admins out there who need some “brush-up tips”, as well as those interviewing for network admin jobs, so I’ve come up with a list of 10 networking concepts that every network admin should know.

So, here is my list of 10 core networking concepts that every Windows Network Admin (or those interviewing for a job as one) must know:

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses to names (forward and reverse lookups, respectively). Thus, when you go to a web page like http://www.windowsnetworking.com, without DNS that name would not be resolved to an IP address and you would not see the web page. In short, if DNS is not working, “nothing is working” for the end users.

DNS server IP addresses are either manually configured or received via DHCP. If you do an IPCONFIG /ALL in Windows, you will see your PC’s DNS server IP addresses.


Figure 1: DNS Servers shown in IPCONFIG output

So, you should know what DNS is, how important it is, and that DNS servers must be correctly configured and working for “almost anything” to work.

When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).


Figure 2: DNS name resolved to an IP address

For more information on DNS servers, see Brian Posey’s article on DNS Servers.
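The same forward lookup that ping performs can be sketched programmatically. This example uses Python’s standard library rather than PowerShell, purely so it is self-contained and portable; “localhost” is used so it resolves even without an external DNS server:

```python
import socket

# Forward lookup: name -> IPv4 address (what your browser does
# before it can connect to a web server).
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# Reverse lookup: IP address -> name (the "reverse" mapping
# DNS also provides, via PTR records).
try:
    name, _, _ = socket.gethostbyaddr(ip)
    print(name)
except socket.herror:
    pass  # not every address has a reverse (PTR) record
```

On Windows, the equivalent quick check from a command prompt is `nslookup <hostname>`.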

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NIC) connected to Ethernet cables, running to Ethernet switches which connect everything together. Without a “link light” on the NIC and the switch, nothing is going to work.

MAC addresses (or Physical addresses) are unique strings that identify Ethernet devices. ARP (address resolution protocol) is the protocol that maps Ethernet MAC addresses to IP addresses. When you go to open a web page and get a successful DNS lookup, you know the IP address. Your computer will then perform an ARP request on the network to find out what computer (identified by their Ethernet MAC address, shown in Figure 1 as the Physical address) has that IP address.

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. An IPv4 address is four numbers, each between 0 and 255, separated by periods – for example, 1.1.1.1.

Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.

You can see my computer’s IP address in Figure 1 where it says IPv4 Address 10.0.1.107. You can also see that I received it via DHCP where it says DHCP Enabled YES.

Larger blocks of IP addresses are broken down into smaller blocks of IP addresses and this is called IP subnetting. I am not going to go into how to do it and you do not need to know how to do it from memory either (unless you are sitting for a certification exam) because you can use an IP subnet calculator, downloaded from the Internet, for free.
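As a quick illustration of what such a subnet calculator does, here is a sketch using Python’s standard `ipaddress` module (the 10.0.1.0/24 block is taken from the example network above):

```python
import ipaddress

# The /24 block that my 10.0.1.107 address lives in.
net = ipaddress.ip_network("10.0.1.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256 addresses in the block

# Breaking the larger block down into four smaller /26 subnets:
for subnet in net.subnets(new_prefix=26):
    print(subnet)  # 10.0.1.0/26, 10.0.1.64/26, 10.0.1.128/26, 10.0.1.192/26
```

The point is simply that subnetting is mechanical arithmetic on the address bits, which is exactly why a downloaded calculator (or a library call) can do it for you.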

4.     Default Gateway

The default gateway, shown in Figure 3 as 10.0.1.1, is where your computer goes to talk to another computer that is not on your local LAN. That default gateway is your local router. A default gateway address is not required, but without one you would not be able to talk to computers outside your network (unless you are using a proxy server).


Figure 3: Network Connection Details
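The decision your computer makes with that gateway setting can be sketched as follows (Python for a self-contained illustration, with the addresses from Figure 3 assumed):

```python
import ipaddress

# My subnet and gateway from Figure 3: 10.0.1.107/24, gateway 10.0.1.1.
LOCAL_NET = ipaddress.ip_network("10.0.1.0/24")
GATEWAY = "10.0.1.1"


def next_hop(destination):
    """Return where a packet is sent first: directly onto the LAN,
    or to the default gateway for anything outside the local subnet."""
    if ipaddress.ip_address(destination) in LOCAL_NET:
        return "direct"  # same subnet: deliver via Ethernet/ARP
    return GATEWAY       # different subnet: hand it to the router


print(next_hop("10.0.1.50"))  # direct
print(next_hop("8.8.8.8"))    # 10.0.1.1
```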

5.     NAT and Private IP Addressing

Today, almost every LAN uses private IP addressing (based on RFC 1918) and then translates those private IPs to public IPs with NAT (network address translation). Private IP addresses always start with 192.168.x.x, 172.16-31.x.x, or 10.x.x.x (those are the blocks of private IPs defined in RFC 1918).

In Figure 2, you can see that we are using private IP addresses because the IP starts with “10”. It is my integrated router/wireless/firewall/switch device that is performing NAT and translating my private IP to my public Internet IP that my router was assigned from my ISP.
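Checking whether an address falls inside one of those three RFC 1918 blocks is a simple membership test; a sketch using Python’s standard `ipaddress` module:

```python
import ipaddress

# The three private address blocks defined in RFC 1918.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),   # covers 172.16.x.x - 172.31.x.x
    ipaddress.ip_network("192.168.0.0/16"),
]


def is_rfc1918(address):
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in RFC1918)


print(is_rfc1918("10.0.1.107"))    # True  - the LAN address from Figure 1
print(is_rfc1918("192.168.1.20"))  # True
print(is_rfc1918("8.8.8.8"))       # False - a public Internet address
```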

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only let in the types of traffic that should be let in.

For more information on Firewalls, checkout our Firewall articles.

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is “local”.

Your wide area network (WAN) is a “big network” that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN but, in larger enterprises or campus environments, routers route traffic between multiple IP subnets on the same large LAN.

On small home networks, you can have an integrated router that also offers firewall, multi-port switch, and wireless access point.

For more information on Routers, see Brian Posey’s Network Basics article on Routers.

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).

Unlike most of the other concepts above, the OSI model isn’t something that network admins use every day. The OSI model is for those seeking certifications like the Cisco CCNA, or when taking some of the Microsoft networking certification tests – or for when you have an over-zealous interviewer who really wants to quiz you.

To fulfill those wanting to quiz you, here is the OSI model:

  • Application – layer 7 – any application using the network; examples include FTP and your web browser
  • Presentation – layer 6 – how the data sent is presented; examples include JPG graphics, ASCII, and XML
  • Session – layer 5 – for applications that keep track of sessions; examples are applications that use Remote Procedure Calls (RPC), like SQL and Exchange
  • Transport – layer 4 – provides reliable communication over the network to make sure that your data actually “gets there”, with TCP being the most common transport layer protocol
  • Network – layer 3 – takes care of addressing on the network and helps to route the packets, with IP being the most common network layer protocol. Routers function at Layer 3.
  • Data Link – layer 2 – transfers frames over the network using protocols like Ethernet and PPP. Switches function at Layer 2.
  • Physical – layer 1 – controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.

At this point, let me stop degrading the value of the OSI model because, even though it is theoretical, it is critical that network admins understand and be able to visualize how every piece of data on the network travels down, then back up, this model. At every layer of the OSI model, all the data from the layer above is encapsulated by the layer below, with that layer’s own information added. And, in reverse, as the data travels back up the layers, it is de-encapsulated.
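The encapsulation/de-encapsulation round trip can be sketched like this (Python, with toy stand-in headers rather than real protocol formats):

```python
# Each layer wraps ("encapsulates") the data from the layer above with
# its own header; the receiver peels the headers off in reverse order.
def encapsulate(payload):
    segment = b"TCP|" + payload  # layer 4 adds a transport header
    packet = b"IP|" + segment    # layer 3 adds network addressing
    frame = b"ETH|" + packet     # layer 2 frames it for the wire
    return frame


def decapsulate(frame):
    packet = frame.removeprefix(b"ETH|")     # layer 2 strips its header
    segment = packet.removeprefix(b"IP|")    # then layer 3
    payload = segment.removeprefix(b"TCP|")  # then layer 4
    return payload


frame = encapsulate(b"GET / HTTP/1.1")
print(frame)               # b'ETH|IP|TCP|GET / HTTP/1.1'
print(decapsulate(frame))  # b'GET / HTTP/1.1'
```

Real headers carry addresses, ports, and checksums rather than fixed tags, but the nesting order is exactly the one described above.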

By understanding this model and how the hardware and software fit together to make a network (like the Internet or your local LAN) work, you can much more efficiently troubleshoot any network. For more information on using the OSI model to troubleshoot a network, see my articles Choose a network troubleshooting methodology and How to use the OSI Model to Troubleshoot Networks.

Summary

I can’t stress enough that if you are interviewing for any job in IT, you should be prepared to answer networking questions. Even if you are not interviewing to be a network admin, you never know when they will send a senior network admin to ask you a few quiz questions to test your knowledge. I can tell you firsthand that the questions above are going to be the go-to topics for most network admins to ask you about during a job interview. And, if you are already a Windows network admin, hopefully this article serves as an excellent overview of the core networking concepts that you should know. While you may not use these every day, knowledge of these concepts is going to help you troubleshoot networking problems faster.

Posted in TUTORIALS | Tagged: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | Leave a Comment »