Windows Management and Scripting


IIS Diagnostics Toolkit tutorial

Posted by Alin D on July 14, 2011

Introduction

Having issues with a Microsoft Internet Information Services (IIS) Web server? Want to know what those issues are? You can call an overpriced IIS consultant or troubleshoot it yourself using Microsoft’s free IIS Diagnostics Toolkit. But, using it effectively takes practice.

 In this resource guide, I offer an overview of the following seven utilities in the IIS Diagnostics Toolkit and provide instructions on how to download and use them:

  • SSL Diagnostics 1.0 diagnoses problems related to Secure Sockets Layer (SSL) issues, such as missing certificate private keys, incorrect IIS metabase bindings, and other problems related to SSL failures. 
  • Authentication and Access Control Diagnostics (AuthDiag) 1.0 diagnoses, checks and monitors permission problems and security issues for Web and FTP requests.
  • Exchange Server SMTP Diagnostics 1.0 gathers SMTP server data that helps diagnose problems with DNS or other important SMTP failures.
  • Log Parser 2.2 sifts through thousands of Event Viewer entries, IIS log files, NetMon captures and other log types.
  • WFetch 1.4 troubleshoots problems that reside in your Web browser. Its GUI allows you to make a request to local or remote Web sites and see the raw HTTP request and response to diagnose problems.
  • Trace Diagnostics (requires Windows Server 2003 Service Pack 1) can monitor the requests to an IIS Web server in real time, or follow a request throughout the IIS processing pipeline to find failures.
  • Debug Diagnostics 1.0 offers both a user interface and a command line for troubleshooting Web application failures like crashes, hangs and memory leaks. For developers, there is also an extensibility model aimed at reducing the time it takes to resolve complex Web application failures.

How to install the Microsoft IIS Diagnostics Toolkit

To install the IIS Diagnostics Toolkit suite:

  1. Download the appropriate platform-specific version of the toolkit for your system environment. 
  2. Download and save the iisdiag.msi file to your workstation or server. (You typically put it directly on the system you’re going to be troubleshooting.) 
  3. Double click on iisdiag.msi and then click Next at the IIS Diagnostics Toolkit Wizard. 
  4. Accept the terms of the License Agreement (required). 
  5. Accept the default username and organization (or change them) and click Next. 
  6. Choose the installation type and click Next again. 
  7. Click Install.

You also have the option to automate installation of the IIS Diagnostics Toolkit:

  1. Download the iisdiag.msi file. 
  2. Click Start -> Run. 
  3. Enter cmd.exe and click OK. 
  4. Change to the download directory where iisdiag.msi is located (e.g., cd iisdiagnostics). 
  5. At the command prompt, type: msiexec /i "iisdiag.msi" /qbr-

The toolkit should install with no user interaction. After the installation is complete, IIS Diagnostics tools will appear in the Programs menu under IIS Diagnostics.

How to use SSL Diagnostics 1.0 to identify issues in IIS

The SSL Diagnostics utility helps troubleshoot problems for SSL-enabled Web sites. It is particularly useful for identifying configuration problems in the Internet Information Services (IIS) metabase, certificates, or certificate stores. You can run this tool using the GUI interface or set it up to run silently and just write the information to a log file.

Key features of the SSL Diagnostics tool include:

  • Certificate Creator: This feature lets admins replace existing server certificates with self-signed server certificates generated by SSL Diagnostics. The functionality is available with IIS 5.0, IIS 5.1 and IIS 6.0. Certificate Creator does not delete your existing certificates, but temporarily replaces the current certificate with a self-signed certificate. When testing is complete, an administrator can restore the original certificate back into IIS. Certificate Creator can help you determine if your SSL problems are related to your Windows server certificate, as well as detect problems with certificates purchased from third-party certification authorities. If SSL works with the self-signed certificate but not with the original certificate, it’s surely a certificate problem. If SSL does not work with either certificate, it’s not a certificate problem. You can then restore the original certificate, which automatically removes the self-signed one.

     

  • SSL handshake: SSL Diagnostics lets admins quickly simulate an SSL connection between a Windows server and Web browser. This is known as an SSL handshake. When implemented, SSL Diagnostics opens a new window that shows the connection information from the client’s point of view, meaning the information the Web browser receives. If there is a problem with the SSL handshake, a warning will appear that describes the problem. This feature helps determine where the connection is breaking down during the SSL handshake process. You can simulate an SSL handshake at the Web-site or Web-page level. 
  • Client Certificate Monitor: You can use SSL Diagnostics to monitor the usage of client certificates in real time by attaching to the associated process where the encryption and decryption takes place. As the certificate information is being parsed by the server, Client Certificate Monitor displays both the client certificates that are trying to connect to your Web site and the associated information contained in those certificates. Client Certificate Monitor also shows the error codes associated with the result of the SSL server settings and client certificates. So Client Certificate Monitor displays both valid certificates and the reasons for invalid certificates, including expired, not yet valid, or revoked client certificates. Although useful, Client Certificate Monitor requires some real-time interaction with the server processes. Because of the impact it can have on performance, using it is not recommended on a production server. After using Client Certificate Monitor, you should restart the server.

When you go to Programs -> IIS Diagnostics -> SSL Diagnostics to open the program, the utility will begin a diagnostic scan of the server on which you are running it. In the results section, just highlight the line entry you wish to research (especially those with red exclamation points) and SSL Diagnostics will give you the issue’s explanation and possible fixes to correct the problem.

How to use Authentication and Access Control Diagnostics (AuthDiag) 1.0

Having issues with clients logging into your Internet Information Services (IIS) Web site? If so, don’t be surprised. Authentication and authorization failures are quite common in IIS. That’s why Microsoft created the AuthDiag tool to troubleshoot and determine the cause of these issues.

AuthDiag analyzes IIS metabase configuration and system-wide policies, warns users of possible points of failure, and guides them through problem resolution.

AuthDiag 1.0 also includes a monitoring tool called AuthMon that captures snapshots of problems as they occur in real time. You can run AuthMon using the GUI interface or set it up to run silently and just write the information to a log file.

Here are a few example issues that this tool can help you correct:

  • 401.1 authentication failures 
  • 401.3 failed ACL on file or directory 
  • Failed authentication because of incorrect privileges in token 
  • Failure to access a page based on metabase configuration 
  • Kerberos failure when worker processes use custom identities 
  • FTP user isolation — file system configured incorrectly 
  • Removal of system permissions for process identities causes “Access Denied” error

To use the AuthDiag utility, go to Programs -> IIS Diagnostics -> AuthDiag, select the Task you want to run and the site, and then click Start Diagnostics. You will be presented with a screen that shows you the results of the scan. Just highlight the line entry you wish to research (especially those with red X’s). The program will offer you a link to Microsoft, where you can hopefully find a resolution for that particular issue.

How to use SMTP Diagnostics 1.0 to troubleshoot IIS issues

Written by the Microsoft Exchange Server team, the SMTPDiag utility will benefit anyone with an SMTP-based environment. It’s simple to use and run and has an excellent analysis engine. SMTPDiag is a command-line troubleshooting tool that works directly on a Windows server with IIS/SMTP service enabled or with Exchange Server installed. It utilizes the same APIs as Windows Server and Microsoft Exchange in order to diagnose configuration and connection issues involving SMTP and DNS.

Per Microsoft:

“SMTPDiag issues DNS queries using both User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) to validate that the queries will succeed. If TCP queries fail, mail will not be delivered successfully.

The first thing that SMTPDiag does after verifying syntax is to check the Start of Authority (SOA) record for the remote address domain. The next step is to validate that the local domain MX/A records are resolvable. This test verifies that the sender domain is valid and any bounces can return to the originating server. This test could fail if the domain is not resolvable from inside the firewall. The remote domain MX/A records are then checked also. If this step fails, mail will not route because of issues with DNS.

At this point, the network DNS infrastructure must be investigated. When all the DNS records have been successfully queried, the tool will try to connect to all the MX (mail exchange) records that were published for the remote domain on port 25 and try to do an EHLO, mail from, rcpt to, and quit command.”

Key features

  • Makes test requests based on user-supplied information, such as sender and receiver
  • Validates that DNS MX records are configured correctly
  • Ensures that the correct firewall or proxy ports are open for SMTP to work properly
  • Ensures the correct permissions on the relevant SMTP directories such as BadMail, Pickup, etc.

To use the SMTP Diagnostics utility:

  1. Go to Start -> Run. 
  2. Type cmd and press Enter. 
  3. Once the command window is open, type cd\ to change to the root of the drive. 
  4. Now type: cd "Program Files\IIS Resources\SMTPDiag"
  5. Finally, enter the following syntax to use the tool (a sample invocation follows):

     SMTPDIAG "sender address" "recipient address" [-d target DNS] [/v]
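For example, a hypothetical run with placeholder addresses (substitute your own sender, recipient and, if needed, a target DNS server) might look like this:

     SMTPDIAG "postmaster@contoso.com" "user@fabrikam.com" /v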

Below is a look at the results. (Email servers, email addresses, DNS servers, etc. are blocked out though, of course.)

How to use the IIS Diagnostics Toolkit’s Log Parser 2.2

Need a way of parsing through data, such as Internet Information Services (IIS) log files, the Windows registry, and Active Directory? The Log Parser 2.2 utility lets you query and sift through thousands of files and data sources.

Per Microsoft: “Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart.”

The Log Parser tool is available as a command-line executable (LogParser.exe) and as a set of scriptable COM objects (LogParser.dll). The two binaries are independent from each other; if you want to use only one, you do not need to install the other file on your computer.

Key features:

 

  • Log Parser’s built-in Input Formats can retrieve data from the following sources:
    • IIS log files (W3C, IIS, NCSA, Centralized Binary Logs, HTTP Error logs, URLScan logs, ODBC logs)
    • Windows Event Log
    • Generic XML, CSV, TSV and W3C-formatted text files (e.g. Exchange Tracking log files, Personal Firewall log files, Windows Media® Services log files, FTP log files, SMTP log files, etc.)
    • Windows registry
    • Active Directory Objects
    • File and Directory information
    • NetMon .cap capture files
    • Extended/Combined NCSA log files
    • ETW traces
    • Custom plug-ins (through a public COM interface)
  • A SQL-like engine core processes the records generated by an Input Format, using a dialect of the SQL language that includes common SQL clauses (SELECT, WHERE, GROUP BY, HAVING, ORDER BY), aggregate functions (SUM, COUNT, AVG, MAX, MIN), and a rich set of functions (e.g. SUBSTR, CASE, COALESCE, REVERSEDNS, etc.); the resulting records are then sent to an Output Format.
  • Output Formats are generic consumers of records; they can be thought of as SQL tables that receive the results of the data processing. Log Parser’s built-in Output Formats can:
    • Write data to text files in different formats (CSV, TSV, XML, W3C, user-defined, etc.)
    • Send data to a SQL database
    • Send data to a SYSLOG server
    • Create charts and save them in either GIF or JPG image files
    • Display data to the console or to the screen

    Log Parser is so powerful and flexible that no single example can show everything it does, but the command structure is straightforward. When using the command-line executable, Log Parser works on commands supplied by the user. Each command has five components:

    1. The Input Format to use
    2. Optional parameters for the Input Format
    3. The Output Format to use
    4. Optional parameters for the Output Format
    5. The SQL query that processes the records generated by the Input Format and produces records for the Output Format

    Microsoft provides the following Windows Event Log example in its documentation, but there are MANY potential uses for this tool:

    C:\>LogParser "SELECT TimeGenerated, SourceName, EventCategoryName, Message INTO report.txt FROM Security WHERE EventID = 528 AND SID LIKE '%TESTUSER%'" -resolveSIDs:ON
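    As another illustration (not from Microsoft’s documentation), here is a sketch of a query against IIS W3C-format log files, assuming the logs sit in the current directory and expose the standard cs-uri-stem field, that lists the ten most requested URLs:

    C:\>LogParser "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC" -i:IISW3C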

    For more information on using the Log Parser utility, check out the following resources:

     

     

  • Log Parser examples
  •  How Log Parser works

How to use Trace Diagnostics 1.0 to troubleshoot IIS issues

HTTP requests often go into ISAPI filters and either never come back out, or become extremely slow or unresponsive. Using the underlying Enterprise Tracing for Windows (ETW), the Trace Diagnostics utility simplifies HTTP request tracing. It includes several components, including IISREQMON and IISTRACE for the command line and IIS Request Viewer (a user interface). The toolset will only install on Service Pack 1 or higher versions of Windows.

 

When an Internet Information Services (IIS) worker process has become unresponsive or slow, Trace Diagnostics is used to understand what requests are executing in that worker process. When an HTTP request is picked up by IIS, it may be handled by multiple components before a response is generated back to the client computer. If a request fails or becomes unresponsive as it makes its way through these components, error-reporting channels like the Windows Event Log and the HTTP Error Log might not provide enough detail to help you locate the source of the problem. That’s when tracing comes in handy.

Key features

  • Request Monitor is a command-line helper tool for working with currently executing requests in IIS 6.0 running on Microsoft Windows Server 2003 SP1 or later. It will help you determine what requests are executing in a worker process when it has become slow or unresponsive. When enabled, Request Monitor prompts worker processes to report statistics and details about every request in each worker process. 
  • IISTrace is a command-line helper tool for request tracing in IIS 6.0 running on Microsoft Windows Server 2003 SP1 or later. IISTrace goes beyond iisreqmon: it allows you to view requests in an application pool and trace the events that occur in order to process those requests. This means you can identify flaws in application code and provide developers with guidance on making corrections. 
  • IIS Request Viewer is a GUI that displays all currently running application pools, threads and currently executing requests per application pool. It uses the underlying tracing commands to create, start, refresh and stop traces. It will display the Request ID, Client IP, State of the request and Time that the request has been taking.

These components of the Trace Diagnostics utility can help you determine root causes of HTTP request issues, specify which providers report events to ETW during a specific trace session, customize how much trace data providers report to ETW, and determine which URLs to trace, thereby focusing your troubleshooting efforts on a specific application.

From a high level, request-based tracing works like this:

  1. The administrator enables a tracing session on the IIS server from a command prompt. 
  2. Enterprise Tracing for Windows (ETW) notifies IIS providers to begin reporting trace events. 
  3. A request enters the IIS worker process. 
  4. The administrator stops the trace session and reviews the trace log to locate the source of the problem. 
  5. Problems are typically identified by error events or a START event that does not contain a corresponding END event.

Since there are multiple tools within the Trace Diagnostics utility, each with its own specific attributes, I will focus only on the GUI tool, IIS Request Viewer. To trace requests using IIS Request Viewer:

  1. Go to Start -> Programs -> IIS Diagnostics (32bit) -> Trace Diagnostics -> IIS Request Viewer -> reqviewer.exe. 
  2. Click File -> Retrieve Requests. 
  3. Expand the left-hand navigation tree to view the different application pools running on your system. 
  4. View the requests in the right-hand window.

If you get the common error below, set your TEMP system variable to something shorter, like C:\Temp.

ERROR: Open TraceLogFile() failed (Win32 error -largeinteger – The path specified is invalid)
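A minimal PowerShell sketch of that workaround, assuming C:\Temp already exists (the change applies to your user profile, so restart the viewer afterward):

[Environment]::SetEnvironmentVariable('TEMP', 'C:\Temp', 'User')
[Environment]::SetEnvironmentVariable('TMP', 'C:\Temp', 'User')    # optional; some tools read TMP instead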

For more information, read Trace Diagnostics Known Bugs and How to Fix ‘Em.

How to use Debug Diagnostics to troubleshoot IIS issues

Quoting Microsoft: “The Debug Diagnostics 1.1 tool is designed to help troubleshoot performance issues in any Win32 user-mode process. For example, the tool can help you troubleshoot an application that stops responding (hangs) or crashes, performs slowly, leaks memory or has memory fragmentation issues. The tool also includes debugging scripts for IIS applications, Web data access components and Microsoft COM+ applications.

Before you start Debug Diagnostics 1.1, you must identify the type of issue you are experiencing. For example, determine whether the application stops responding, crashes, performs slowly, or leaks memory. After you know the kind of issue, you can configure the Debug Diagnostics 1.1 tool to gather the correct data. Then, you can use the data to determine and resolve the cause of the problem.”

The Debug Diagnostics 1.1 tool includes three views:

  • Rules: Uses a wizard to create control scripts for the debugger host. 
  • Advanced analysis: Runs the selected analysis script on one or more memory dump files. 
  • Processes: Displays the status of running processes and scripts.

How to use the Debug Diagnostics utility

  1. Open the tool: Go to Start -> Run, type the path of the Debug Diagnostics 1.0 tool, and then click OK. (By default, the Debug Diagnostics 1.0 tool is located in C:\Program Files\IIS Resources\DebugDiag.) If the Select Rule Type dialog box appears, click Cancel. 
  2. Select the memory dump file that you want to analyze: Click the Advanced Analysis tab, and then click Add Data Files. Select the memory dump file that you want to analyze and click Open. 
  3. Configure the path for the symbol files: On the Tools menu, click Options and Settings. Go to the Folders and Search Paths tab, type the following path in the Symbol Search Path for Analysis box, and then click OK: srv*filepath*http://msdl.microsoft.com/download/symbols (Filepath is a placeholder for the folder or for the UNC share where you want to store the downloaded symbol files. By default, the symbol files are stored in the C:\Symcache folder. Additionally, you should know that you cannot browse the http://msdl.microsoft.com/download/symbols Web site; only debugging tools can access this Web site.) 
  4. Start the analysis: Navigate to the Advanced Analysis tab -> Available Analysis Scripts, and select the type of analysis that you want. For example, if you created the memory dump file because a process stopped responding, click Crash/Hang Analyzers. If you created the memory dump file to troubleshoot a memory leak issue, click Memory Pressure Analysis. Now, under Data Files, click the memory dump file that you selected in step 2. Click Start Analysis.

 

How to read a Debug Diagnostics report

After completing the steps above, you can review the Debug Diagnostics report that is displayed in Microsoft Internet Explorer. A copy of the report is also stored in the following folder: C:\Program Files\IIS Resources\DebugDiag\Reports. The report is broken down into the following sections:

  • Analysis Summary: In this section, the detected issues are classified as errors, warnings, or information. Each error includes a description. Additionally, the Analysis Summary contains recommendations for how to resolve the issues. The recommendations may include reviewing a Microsoft Knowledge Base article, contacting the application vendor, or contacting Microsoft Product Support Services. Suggestions to the application developer may also be provided.
  • Analysis Details: This section provides a detailed analysis of the information in the memory dump file.

 

  • Script Summary: This section provides a report on the status of the script (Iisanalysis.asp) that is used to analyze the memory dump file. If an error occurs when the script is running, the Script Summary reports the error code, the source, the destination, and the lines of code that cause the error.

 

For more information on using the Debug Diagnostics utility, I recommend the following resources:

  • How to use the Debug Diagnostics 1.0 Tool to analyze a memory dump file
  • How to use the Debug Diagnostics tool to troubleshoot an IIS process that stops unexpectedly 
  • How to use the IIS Debug Diagnostics tool to troubleshoot a memory leak in an IIS process 
  • How to use the Debug Diagnostics Tool to troubleshoot high CPU usage by a process in IIS
  • How to use the Debug Diagnostics tool to troubleshoot a process that has stopped responding in IIS


    Exchange 2010 OWA mailbox policies and feature segmentation

    Posted by Alin D on June 12, 2011

    Exchange 2010 supports the ability to allocate different levels of functionality to OWA users through policies. Although Exchange 2010 includes a default OWA policy, it is not actually applied to mailboxes unless you explicitly select the mailbox and apply the policy to it. Otherwise access to OWA features is controlled by the segmentation properties defined for the OWA virtual directory on each CAS server.


    OWA mailbox policies didn’t exist in Exchange 2007, and the only way that you could segment functionality was through the properties of the OWA Web site. The problem with this approach is that any change applies to all mailboxes that connect to that CAS. Using policies allows more granular control because you can apply different policies at the level of an individual mailbox. In addition to their ability to segment features presented through OWA, OWA mailbox policies control some of the user-controllable settings available through ECP.

    The easiest way to apply any OWA policy, including the default policy, to a set of mailboxes is with the Set-CASMailbox cmdlet. For example, this command fetches all the mailboxes that belong to the Exchange Users organizational unit (OU) and pipes them to Set-CASMailbox to apply the default OWA mailbox policy:

    Get-Mailbox -OrganizationalUnit 'Exchange Users' | Set-CASMailbox -OwaMailboxPolicy 'Default'

    The default OWA policy typically duplicates the default out-of-the-box segmentation properties of the OWA default Web site as installed on a CAS server and permits access to all OWA features, including the premium client. To create a new policy, go to the Organization Configuration section of EMC, select Client Access, click the Outlook Web App Mailbox Policies tab, and select the New Outlook Web App Mailbox Policy option in the action pane. A wizard then allows you to select which features you want users to access.


    In this case, we create a policy to restrict access to the OWA Light version that also selectively disables some OWA features.

    Some of the policy restriction features depend on other components (text messaging, public folders, and instant messaging), and others require a really good reason before you disable them. For example, it usually doesn’t make much sense to disable the Change Password feature because handling user requests to change their passwords creates extra work for help desks.

    A new policy can also be created with EMS. For whatever reason, this is a two-step process. First, you create the new policy with the New-OWAMailboxPolicy cmdlet, and then you use the Set-OWAMailboxPolicy cmdlet to define what features are enabled or disabled by the policy. For example, here’s a policy that allows users to use the premium client while removing some of the more esoteric features:

    New-OWAMailboxPolicy -Name 'Limited OWA features'

    Set-OWAMailboxPolicy -Identity 'Limited OWA features' `
      -ActiveSyncIntegrationEnabled $True -AllAddressListsEnabled $True `
      -CalendarEnabled $True -ContactsEnabled $True -JournalEnabled $True `
      -JunkEmailEnabled $True -RemindersAndNotificationsEnabled $True `
      -NotesEnabled $True -PremiumClientEnabled $True -SearchFoldersEnabled $False `
      -SignaturesEnabled $True -SpellCheckerEnabled $True -TasksEnabled $True `
      -ThemeSelectionEnabled $False -UMIntegrationEnabled $False `
      -ChangePasswordEnabled $True -RulesEnabled $True -PublicFoldersEnabled $False `
      -SMimeEnabled $True -RecoverDeletedItemsEnabled $True `
      -InstantMessagingEnabled $False -TextMessagingEnabled $False
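    After running these commands, a quick sanity check (not part of the original procedure) is to read the policy back and confirm which features it enables:

    Get-OWAMailboxPolicy -Identity 'Limited OWA features' | Format-List *Enabled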

    More than just segmentation

    Although feature segmentation is the most obvious use of OWA mailbox policies and receives the most attention, you can also control other aspects of how users work with OWA through these policies. After you create a new OWA mailbox policy, you are able to define rules for file access and download when OWA is run on private and public computers. Click the policy with which you want to work and then select Properties. You can then access the properties that control feature segmentation and two other tabs for Public Computer File Access and Private Computer File Access.

    The Direct File Access settings allow you to control how various file types are opened by users through OWA. The default option for both public and private computers is to allow direct access, meaning that users are able to open files. However, not all file types are treated equally, as some file types pose a potential risk of infection because they are often used as threat vectors by hackers who wish to infiltrate a computer. Files are therefore grouped into four categories:

    • Always Allow: These files are deemed to be innocuous and safe to open on the client computer. The list includes types such as Word documents (.doc and .docx extensions) and Windows bitmaps (.bmp extension) that you can be reasonably sure will not contain malicious code.

    • Always Block: These files pose a significant risk to a computer when they are opened by a user because they contain executable code. These files include types such as Windows batch files (.bat extension) and Windows command files (.cmd extension).

    • Force Save: These are files that users cannot open directly and must save to disk before they can access the content. These types include Windows compiled help files (.chm extension).

    • All others (unknown files that are not included in the other lists): The policy states what should be done if an unknown file type is detected. The default is to force a save to disk.

    The priority of these actions runs from top to bottom. In other words, if a file type is on both the Always Block and the Force Save lists, it will be blocked.

    If you prefer to have users open a viewer to access files rather than running the native application, you can select the Force WebReady Document Viewing When A Converter Is Available option. The effect is to force OWA to check documents as they are opened to see whether a WebReady converter is available and, if so, to always use the converter to open the file rather than calling the application. The idea is to eliminate any potential risk from macros or other code that could be carried around in the common file formats supported by WebReady, such as Microsoft Word and Microsoft Excel. In truth, the antivirus software that runs on today’s PCs will usually catch any malicious code, so forcing WebReady viewing for OWA when it is run on a private computer could be considered overkill. Next image shows how to access the list of file formats supported by WebReady converters. This list has been augmented over the last few years and supports a reasonably full set of the most common file formats that users will need to open in office environments.

    It might be safe to allow users to open documents with applications on private computers, but it’s a different matter on computers that are used for public access. In this scenario, it is reasonably common to block access to attachments to avoid the risk that users might download and leave sensitive files on a computer that can be accessed by an unauthorized individual. You can do this by clearing the option through EMC or by running the Set-OWAMailboxPolicy cmdlet. Settings applied through an OWA mailbox policy override those set through the properties of the OWA virtual directory. For example:

    Set-OWAMailboxPolicy -Identity 'Restricted Users - OWA Light' `
      -DirectFileAccessOnPublicComputersEnabled $False `
      -ForceWebReadyDocumentViewingFirstOnPublicComputers $True

    When this policy is applied, users will not be able to open or download and save files on public computers, but they will be able to access the content if a WebReady viewer is available.

    Web links that are included in messages are still active. Exchange 2010 includes viewers for Microsoft Office documents (next image), RTF, and PDF files.

     

    Attachment processing

    Administrators control how OWA handles attachments by creating a list of attachment types and marking each as blocked, allowed, or “force to save.” Obviously, blocked means that users cannot open or download an attachment of this type to their PC, normally because the file type is likely to contain a virus or some other dangerous content. Allowed means the opposite, as there is a high degree of confidence that these attachments are safe.

    OWA performs special processing for attachments marked as force to save. This means that the user has to save the attachment to his local disk before he can view its contents. As OWA downloads the attachment from the server, it checks to see whether it is XML or HTML. If it is, OWA runs code called Safe HTML to strip out any malicious XML or HTML code. If the attachment is another type, OWA examines the content to see if it actually contains XML or HTML code. This check is performed to ensure that no attachment is ever downloaded that contains malicious code, which could introduce a virus or another dangerous program onto the PC. If hidden XML or HTML code is detected, OWA strips the attachment and replaces it with a text file to tell the user that the attachment was removed.

    Applying an OWA mailbox policy

    After the new policy is created, to apply it, you switch to Recipient Configuration and select one or more mailboxes and then Properties from the action pane. Click the Mailbox Features tab, select Outlook Web App, and then select Properties. You can then select an Outlook Web App mailbox policy and apply it to the mailbox.

    Exchange enforces the new policy the next time that the user logs into her mailbox. If everything works as expected, the user will be presented with a restricted version of OWA Light. Of course, you can also apply an OWA mailbox policy to a mailbox with EMS:

    Set-CASMailbox -Identity 'Andrews, Ben (IT)' -OWAMailboxPolicy 'Restricted Users - OWA Light'

    Integrating OWA and OCS

    One small glitch might creep in with the instant messaging section of the policy. OWA 2010 supports a nice integration with Office Communications Server (OCS), but if you want to create the link between the two products, you have to ensure that the OWA mailbox policy that is applied to mailboxes that want to use OCS specifies “OCS” in the InstantMessagingType attribute. For example:

    Set-OWAMailboxPolicy -Identity 'OCS Integration Enabled' -InstantMessagingType 'OCS' -InstantMessagingEnabled $True

    Set-CASMailbox -Identity 'Akers, Kim' -OWAMailboxPolicy 'OCS Integration Enabled'


    How to manage Hyper-V with PowerShell

    Posted by Alin D on June 10, 2011

    Many admins use PowerShell to automate components like user creation and folder permissions, but virtualization technologies can also be managed from the command line, including Microsoft Hyper-V.

    While there are several ways to manage Hyper-V with PowerShell, this article will focus on the free approaches using Windows Management Instrumentation (WMI) scripting and an open source tool from CodePlex.

    Before using WMI scripting to manage Hyper-V, it’s important to understand what classes are available. Microsoft’s list includes a significant number of classes, and while it is fairly complete, the classes are not necessarily easy to use and are certainly not intuitive. Therefore, using WMI to manage Hyper-V is not for the faint of heart.

    One of the more popular methods for managing Hyper-V with PowerShell is the PowerShell Management Library for Hyper-V (PSHyperV), a free, open source CodePlex project written by James O’Neil. This is by far the best free option out there and gives administrators a very thorough collection of cmdlets that do everything from virtual machine inventory to virtual network management. Let’s touch on a few of them:

    Get-VM — returns all the virtual machines on a given Hyper-V server

    The following code demonstrates the Get-VM command:
    Function Get-VM
    {# .ExternalHelp MAML-VM.XML
        param(
            [parameter(ValueFromPipeLine = $true)]
            [ValidateNotNullOrEmpty()][Alias("VMName")]
            $Name = "%",

            [parameter()][ValidateNotNullOrEmpty()]
            $Server = ".",    # may need to look for VM(s) on multiple servers
            [Switch]$Suspended,
            [switch]$Running,
            [switch]$Stopped
        )
        Process {
            # In case people are used to the * as a wildcard...
            if ($Name.count -gt 1) {[Void]$PSBoundParameters.Remove("Name"); $Name | ForEach-Object {Get-VM -Name $_ @PSBoundParameters}}
            if ($Name -is [String]) {
                $Name = $Name.Replace("*","%")
                # Note: in V1 the test was for caption like "Virtual%", which did not work in languages other than English.
                # Thanks to Ronald Beekelaar - we now test for a process ID; the host has a null process ID, stopped VMs have an ID of 0.
                $WQL = "SELECT * FROM MSVM_ComputerSystem WHERE ElementName LIKE '$Name' AND ProcessID >= 0"
                if ($Running -or $Stopped -or $Suspended) {
                    $state = ""
                    if ($Running)   {$State += " or enabledState = " + [int][VMState]::Running }
                    if ($Stopped)   {$State += " or enabledState = " + [int][VMState]::Stopped }
                    if ($Suspended) {$State += " or enabledState = " + [int][VMState]::Suspended }
                    $state = $state.Substring(4)
                    $WQL += " AND ($state)"
                }
                Get-WmiObject -ComputerName $Server -Namespace $HyperVNamespace -Query $WQL |
                    Add-Member -MemberType ALIASPROPERTY -Name "VMElementName" -Value "ElementName" -PassThru
            }
            elseif ($Name.__class) {
                Switch ($Name.__class) {
                    "Msvm_ComputerSystem"           {$Name}
                    "Msvm_VirtualSystemSettingData" {Get-WmiObject -ComputerName $Name.__SERVER -Namespace $HyperVNamespace -Query "associators of {$($Name.__path)} where resultclass=Msvm_ComputerSystem"}
                    Default                         {Get-WmiObject -ComputerName $Name.__SERVER -Namespace $HyperVNamespace -Query "associators of {$($Name.__path)} where resultclass=Msvm_VirtualSystemSettingData" |
                                                         ForEach-Object {$_.getRelated("Msvm_ComputerSystem")} | Select-Object -Unique }
                }
            }
        }
    }

    As you can see, the code basically wraps the WMI class with some helper logic and reports the results.
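    For example, once the PSHyperV library has been loaded into a session, calls along the following lines should work (the host and VM names here are hypothetical):

    Get-VM -Server "HV-HOST01" -Running      # running VMs on a remote Hyper-V host
    "web01", "sql01" | Get-VM                # look up specific VMs by name on the local host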

    Get-VMSwitch — Returns all the virtual switches on the Hyper-V server

    Function Get-VMSwitch
    {# .ExternalHelp MAML-VMNetwork.XML
        param(
            [parameter(ValueFromPipeline = $true)][Alias("Name")]
            [String]$VirtualSwitchName = "%",

            [parameter()][ValidateNotNullOrEmpty()]
            $Server = "."    # can query multiple servers for switches
        )
        process {
            $VirtualSwitchName = $VirtualSwitchName.Replace("*","%")
            # Query the Hyper-V WMI namespace for switches whose element name matches the pattern
            Get-WmiObject -ComputerName $Server -Namespace $HyperVNamespace -Query "Select * From MsVM_VirtualSwitch Where elementname like '$VirtualSwitchName' "
        }
    }

    Get-VMSnapShot — Provides all the snapshots on the Hyper-V server

    The following command demonstrates the Get-VMSnapShot command:

    Function Get-VMSnapshot
    {# .ExternalHelp MAML-VMSnapshot.XML
        Param(
            [parameter(Position=0, ValueFromPipeline = $true)]
            $VM = "%",

            [String]$Name = "%",

            [parameter()][ValidateNotNullOrEmpty()]
            $Server = ".",
            [Switch]$Current,
            [Switch]$Newest,
            [Switch]$Root
        )
        process {
            # Resolve VM names (or arrays of names) to the underlying WMI objects first
            if ($VM -is [String]) {$VM = (Get-VM -Name $VM -Server $Server) }
            if ($VM.count -gt 1)  {[Void]$PSBoundParameters.Remove("VM"); $VM | ForEach-Object {Get-VMSnapshot -VM $_ @PSBoundParameters}}
            if ($VM.__CLASS -eq 'Msvm_ComputerSystem') {
                if ($Current) {Get-WmiObject -ComputerName $VM.__server -Namespace $HyperVNamespace -Query "associators of {$($VM.__path)} where assocClass=MSvm_PreviousSettingData"}
                else {
                    $Snaps = Get-WmiObject -ComputerName $VM.__server -Namespace $HyperVNamespace -Query "Select * From MsVM_VirtualSystemSettingData Where systemName='$($VM.name)' and instanceID <> 'Microsoft:$($VM.name)' and elementName like '$Name' "
                    if     ($Newest) {$Snaps | Sort-Object -Property creationTime | Select-Object -Last 1 }
                    elseif ($Root)   {$Snaps | Where-Object {$_.parent -eq $null} }
                    else             {$Snaps}
                }
            }
        }
    }
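    As a usage sketch (the VM name is hypothetical), the function can be called directly or fed from Get-VM:

    Get-VMSnapshot -VM "web01" -Newest       # most recent snapshot of web01 on the local host
    Get-VM -Running | Get-VMSnapshot -Root   # root snapshots of every running VM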

    PSHyperV includes several additional functions to help admins perform related tasks, including finding, manipulating and configuring different components of the hypervisor; the full library can be found on the CodePlex website.

    Writing WMI wrappers and using PSHyperV are just a few of the ways admins can manage Hyper-V using PowerShell. Note that the latest release of PSHyperV isn’t a complete version, and thus, isn’t as stable as other options may be.

     

     


    Working with SQL Azure Databases from Visual Studio 2008

    Posted by Alin D on May 29, 2011

    SQL Azure Services fall under the umbrella of the Windows Azure Platform. The Windows Azure Platform, which includes Windows Azure storage and compute as well as Azure AppFabric, is one of the four online services offered by Microsoft; the other three are Bing, Live, and Microsoft Advertising.

    In this chapter, we will be looking at how SQL Azure data may be accessed from the client premises by applications using well-known Microsoft technologies for data access. The emphasis will be more on administering the SQL Azure databases using the client and server APIs. We will also discuss the best practices of accessing data from the client. For the practical elements of this chapter, we will be mostly using Visual Studio 2008 SP1, and with some comments on using Visual Studio 2010. The following topics will be covered:

    • SQL Azure architecture
    • Microsoft data access technologies
    • Easy way to connect to SQL Azure with Microsoft data access technologies
    • Preferred way to connect to SQL Azure
    • Connecting to SQL Azure using server APIs
    • Creating database objects using ADO.NET

    SQL Azure architecture

    SQL Azure has a four-layered architecture, a Client Layer followed by a Services Layer, a Platform Layer, and finally the Infrastructure Layer. A schematic of the architecture is shown in the following diagram:

    The Infrastructure Layer has all the associated hardware in the Microsoft Data Center as described in Chapter 1, Cloud Computing and Microsoft Azure Services Platform, managed automatically by the SQL Azure Fabric. The Platform Layer, which is above the Infrastructure Layer, has all the SQL Server clusters managed by the fabric as well as the Management Services. The Services Layer above the Platform Layer does all of the business logic, provisioning, connection routing, billing, and so on. The Client Layer above the Platform Layer can be at the client site or at the client’s hosted site with all the applications that need access to SQL Azure.

    Microsoft client technologies such as ODBC and ADO.NET, residing at the client site (or in the Windows Azure Platform), converse with SQL Azure using TDS, while WCF Data Services uses HTTP/REST. Applications such as PHP, SQL Server applications and tools, and WCF Data Services can work with ODBC and ADO.NET. Hence, TDS is the carrier of choice for information from SQL Server tools, as well as the various applications.

    The four-layered architecture shown in the previous diagram is based on the Microsoft documentation. As seen here, all applications access the Services Layer using TDS+Secure Sockets Layer (SSL) mediated by either ODBC or ADO.NET. The Services Layer routes the client requests to the Platform Layer where the SQL Server comes in, to process the client requests mediated by the SQL Azure Fabric.

    The following diagram shows the TDS Gateway Layering in SQL Azure with some more details, copied, with permission, from a PowerPoint presentation by Jeff Currier to PDC 2009. This corresponds to the Services Layer of the previous image.

    The Gateway layer enforces the authentication and authorization policy. This security enforcing layer isolates the application from the databases. The TDS Gateway takes care of a number of tasks, such as provisioning the endpoint, administering the AdminService, connection management, and so on. SQL Azure login requests arrive at the Gateway, which then accesses the Master and User DBs. Credentials to the User DB are matched to those on the Master and, after validation, a TDS session opens to forward requests to User DB. Make sure you watch the video presentation (Microsoft SQL Azure Database: Under the Hood) by Jeff Currier at the PDC here: http://microsoftpdc.com/Sessions/SVC12.

    Application access to SQL Azure

    Onsite client applications access the SQL Azure Services (databases) on the cloud using standard client libraries such as ODBC and ADO.NET (using the TDS protocol), and so on. What this means is that all of these technologies are familiar to application developers, who are not required to learn new development techniques. In addition to Microsoft technologies, open source programming languages can also be used to develop Azure applications. We will look at the details of how client libraries access SQL Azure in this chapter. We will also learn how to manipulate database objects after accessing SQL Azure using T-SQL.

    The client applications using TDS protocol arrive at the SQL Azure databases after passing through a load balancer, which forwards the TDS packets to the TDS Gateway layer, which then passes them on to the SQL Azure databases.

    TDS and SQL Azure

    Tabular Data Stream (TDS) was the technology originally created by Sybase to allow applications to access data stored in relational tables. Before SQL Azure came into existence, its predecessor SQL Data Services (SDS) was only able to access data using HTTP(s) or REST. Leveraging TDS, Microsoft skillfully morphed SDS into SQL Azure, so that the SQL Servers can be accessed in their native protocol and T-SQL code can be run in the cloud.

    Presently, as shown in the following diagram based on the Microsoft documentation, data can be accessed by HTTP(s), as it is done by web facing applications (Scenario B) as well as using TDS from onsite applications and tools (Scenario A). Ultimately, however, it is TDS + SSL that finally reaches SQL Azure.


    Microsoft data access technologies

     

    There are two ways to interact with data using Visual Studio, either by designing an application using datasets and data adapters, or by performing direct operations on the data source, in this case the SQL Azure database. When and why you use one or the other depends on the situation you are trying to address. In situations where you are performing a database lookup, creating and modifying database structures, such as tables, views, stored procedures, or executing queries to get a single aggregate value, and so on, you directly interact with the database. This is because if you are creating or modifying database objects, you obviously cannot use datasets. On the other hand, if you are trying to access the data on the SQL Azure server from the web, you will be connecting using HTTP/HTTPS or REST-based technologies and will be using datasets and data adapters. The SQL Azure architecture accommodates both types of interaction. In this chapter, we will be looking at direct interaction with the database.

    Using Visual Studio, you use data commands to work directly with the database. The steps to execute commands are as follows (a short sketch of the pattern follows the list):

    1. Create a connection.

    2. Configure a command that uses the connection with a SQL statement or the name of a stored procedure.

    3. Execute the command.

    4. Retrieve the data that the command produces by a data reader.
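    Although the chapter’s examples use Visual Basic inside Visual Studio, the same four steps can be sketched in a few lines of PowerShell against ADO.NET; the server name, credentials, and query below are placeholders rather than values from the chapter:

    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=tcp:yourserver.database.windows.net;Database=Bluesky;User ID=youruser@yourserver;Password=yourpassword;Trusted_Connection=False;Encrypt=True;"
    $conn.Open()                                 # step 1: create and open the connection
    $cmd = $conn.CreateCommand()                 # step 2: configure a command on that connection
    $cmd.CommandText = "SELECT name FROM sys.tables"
    $reader = $cmd.ExecuteReader()               # step 3: execute the command
    while ($reader.Read()) { $reader["name"] }   # step 4: retrieve the data with a data reader
    $reader.Close(); $conn.Close()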

    Connecting to the database

    In connecting to the SQL Azure database, you can use the following clients that are supported by Visual Studio 2008 by default:

    • SqlConnection
    • OdbcConnection
    • OledbConnection
    • EntityConnection

    Data providers

    When you use the assembly System.Data (C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Data.dll), you can access the namespaces as shown in the following screenshot:

    In order to access the EntityConnection, however, you should reference the System.Data.EntityClient.

    The best way to understand the various namespaces/classes that help in connecting to the SQL Azure is to look at the namespaces/classes in the Object Browser.

    The SqlConnection class member details are easily seen in the Object Browser when you search for SqlConnection.

    Similarly, you search for OdbcConnection in the Object Browser to get all the related members.

    Additionally, you can look up the details for OledbConnection and EntityConnection classes by referencing the corresponding namespaces. It is recommended that you review some of the members that create and manage database objects that we will be using later.

    Connection string

    In order to access the SQL Server you need to know the connection string. The connection string is a list of key/value pairs specific to each type of provider (SqlClient, ODBC, and OLEDB). If you know the connection string, you can specify it directly in code by typing it as an argument to SqlConnection(), as shown here, without waiting for IntelliSense to guide you. The same approach applies to the other providers.

    This is easily available in the Visual Studio IDE as an intellisense drop-down. Intellisense is a great help in being productive.

    In the following screenshot, the connection string to the SQL Azure is provided but partially hidden in the view. The screenshot shows everything that is accessible to the SqlConnection:

    Following this, you will need to open the connection, which happens to be one of the methods of SqlConnection. The connection is necessary for you to create a command that you can execute.

    Commands

    The commands that we mainly use in SQL Azure are as follows:

    • SqlCommand
    • OleDbCommand
    • OdbcCommand
    • EntityCommand

    Once the connection is established, you can create commands that can be run on the data source by setting up the code to create a command as follows:

    CommandType shown in the previous screenshot is another property associated with commands. You should exercise caution when this property is of type Text. This is where SQL injection attacks take place. A preferred type to deter injection attacks is StoredProcedure or a parameterized query.
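    As a brief illustration of the safer pattern (the table and column names are hypothetical), the user-supplied value travels as a parameter rather than being concatenated into the CommandText:

    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "SELECT CustomerName FROM Customers WHERE Region = @region"
    [void]$cmd.Parameters.AddWithValue("@region", $regionFromUser)   # the value is never parsed as SQL
    $reader = $cmd.ExecuteReader()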

    What do the commands accomplish? SQL commands are issued to accomplish several tasks such as the following:

    1. You can execute commands to return result sets that can be read with an associated reader such as:

    • SqlDataReader
    • OleDbDataReader
    • OdbcDataReader
    • EntityDataReader

    Search for everything that the SqlDataReader can do in the Object Browser.

    2. Execute Data Definition Language (DDL) commands to create, edit, and delete tables, stored procedures, and so on, provided you have permissions to do these operations.

    3. Change database information by executing dynamic SQL commands to update, insert, and delete records.

    4. Execute commands to return a single scalar value like an aggregate value.

    5. Execute commands to return XML values and query the XML code.

    In the following section, you will use connection string builders to create tamper-resistant connection strings that prevent code injection by external means.

    Using connection string builders

     

    The previous section showed you how to use the connection strings to access the database. However, it is not good practice to send the strings in the form shown, as it can be tampered with. Connection strings must be carefully protected and secured. This is especially true when you are accessing them over the internet. One of the security considerations to prevent SQL injection attacks is to prevent externally injected script getting into the connection string. The injected script, while being syntactically correct, can introduce malicious code. If the connection string is obtained at runtime from user inputs, this is even more important.

    While connecting to SQL Azure over the internet, make sure that the ADO.NET Encrypt (=true) and TrustServerCertificate (=false) connection properties are in place. This will ensure an encrypted connection and prevents the man-in-the-middle attacks (http://msdn.microsoft.com/en-us/library/ff394108.aspx).

    In versions earlier than ADO.NET 3.5, compile-time checking of connection strings, formed by concatenating string values, did not occur, so at runtime, additional values of a malicious nature could be injected (for example, by adding a semi-colon followed by a key value pair). Review this article, for example: http://www.codeproject.com/KB/database/Connection_Strings.aspx.

    Also, different providers supported a different syntax for connection string keywords (Password or PWD, and so on), making it difficult to string keywords together manually and validate them. In ADO.NET 2.0, new connection string builders for each .NET Framework provider were introduced. The data providers, since then, have included a connection string builder class, which builds a string containing only acceptable key values for that provider. This acts as a filter for inserted code, allowing only those values acceptable to the provider. You will find the details of these functions in SqlConnectionStringBuilder.

    The different connection string builders trace their roots to the DbConnectionStringBuilder class. The .NET Framework has the following connection string builders (the EntityClient provider was added in Framework 3.5):

    Provider                     Connection String Builder
    System.Data.SqlClient        SqlConnectionStringBuilder
    System.Data.OleDb            OleDbConnectionStringBuilder
    System.Data.Odbc             OdbcConnectionStringBuilder
    System.Data.OracleClient     OracleConnectionStringBuilder
    System.Data.EntityClient     EntityConnectionStringBuilder

    In a manner similar to the connection strings, you can get a full appreciation of the properties that these classes support by looking them up in the Object Browser.

    Using the connection builder is a recommended way of forming a connection string, not only for SQL Azure, but also for any place where a connection string is going to be used.
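    A minimal PowerShell sketch of the builder pattern, with placeholder server and credentials (the same properties exist in VB and C#):

    $builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder
    $builder.DataSource             = "tcp:yourserver.database.windows.net"
    $builder.InitialCatalog         = "Bluesky"
    $builder.UserID                 = "youruser@yourserver"
    $builder.Password               = "yourpassword"
    $builder.Encrypt                = $true      # encrypted connection, as recommended earlier
    $builder.TrustServerCertificate = $false
    $builder.ConnectionString                    # only keywords the provider recognizes survive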

    Accessing SQL Azure data using the Server Management Objects (SMO)

    The Server Management Object model diagram available here: http://msdn.microsoft.com/en-us/library/ms162209.aspx reveals a rich feature set that this model offers to manage the SQL Server. SMO is based on SQL Server Foundation Classes (SFC). This model is based on the SQL Server API, replaces the older SQL-DMO (Distributed Management Objects), and is very comprehensive, with many new features announced for SQL Server 2008. With SQL Azure you can use SMO, but only a subset of the features is supported. Features like snapshots, trace, replay of SQL Server events, Service Broker, and so on are not supported.

    The SMO model is built on a hierarchy of objects with the server at the very top. The rest of the objects are all instance class objects. Using this model you can program all aspects of the SQL Server. The objects are only loaded when specifically referenced.

    To access a server you need to establish a connection first. This applies to using SMO as well. You create an instance of the server object and establish its connection to an instance of the SQL Server. In the present context, the connection we would like to establish is to the SQL Azure database. Following the creation of a server object, a ServerConnection object is created, a variable that can be used again and again. One difference with the Client APIs described earlier is that it is not necessary to call a Connect method. SMO will automatically connect when required and after the operation it is going to perform is finished, it releases the connection to the pool. It is also possible to call a Non-pooled Connection property of ServerConnection object.
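    A rough PowerShell sketch of that sequence, assuming the SMO assemblies are installed on the client and using placeholder credentials (this mirrors the pattern described above, not the chapter’s exact example):

    [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
    [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo")
    $smoConn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
    $smoConn.ConnectionString = "Server=tcp:yourserver.database.windows.net;Database=master;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True"
    $azureServer = New-Object Microsoft.SqlServer.Management.Smo.Server($smoConn)
    $azureServer.Databases | Select-Object Name   # SMO connects on demand and releases the connection when done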

    If you are in doubt at any time using a method or property, make sure you access the SMO in the Object Browser.

    In order to view this in the Object Browser, after adding the three references, right-click Microsoft.SqlServer.Smo and choose View in the Object Browser. The assembly node is then revealed in the Object Explorer. This is because, sometimes, even though the assembly is added to the project in the Solution Explorer, it does not get displayed in the Object Browser.

    We will look at connecting to an SQL Azure database using SMO in this chapter with a practical example.

    Accessing SQL Azure from Visual Studio 2010 Express

     

    Visual Studio 2010 Express is a free program from the Visual Studio suite that can work with SQL Azure. The details of downloading this program are described here: http://hodentek.blogspot.com/2010/06/get-these-web-development-tools-for.html. You may download and install the program in a few easy steps starting from here: http://www.microsoft.com/express/Web/. After installing, you will have a shortcut in Start | All Programs from where you can launch the application. Visual Studio 2010 Express installs a Microsoft Visual Web Developer 2010 Express and a Microsoft Visual Studio 2010 Express for Windows Phone. You will be using the web developer.

    • Launch the application as an administrator (Run as administrator option after a right-click).

    • We will connect to SQL Azure from, for example, a web application.

    • Click on File | New Project and create an ASP.NET Web Application.

    • Change the default name from WebApplication1 to one of your own, say ConSQLAzure.

    • Click the menu item Data and click Add New Data Source.

    • In the New Data Source window, click on the Database icon and click Next>.

    • In the Choose Your Data Connection window, you may make a new connection using the New Connection… button.

    • Click the New Connection… button to display the Add Connection window, as shown in the following screenshot:

     

    The easy way to connect to SQL Azure using ADO.NET 3.5, ODBC, and OLE DB

You will now see how easy it is to connect to a SQL Azure database using Microsoft client technologies such as ADO.NET, ODBC, and OLE DB. When connecting to the database, the single most important item is the connection string. The connection strings for SQL Azure are readily available in the portal, as described in Chapter 2, SQL Azure Services; however, you may need to supply the appropriate arguments yourself when constructing the connection string for OLE DB.

In the following steps, a connection to the Bluesky database (created in the first chapter) is opened and later closed using ADO.NET 3.5, ODBC, and OLE DB.

    • Run Visual Studio 2008 as an administrator from its shortcut in Start | All Programs.

    • From File | New Project… (Ctrl + N) | Visual Basic, choose a Windows Forms Application project from the Visual Studio installed templates, keeping the default Framework option (3.5). Change the default name and click OK. This adds a form, Form1.vb, and a My Project folder to the project (herein named TestConnect).

    • Drag and drop three buttons onto Form1.vb, as shown in the following screenshot:

    Using ADO.NET to connect to the SQL Azure database

    The SqlClient is used for establishing an ADO.NET connection to the SQL Azure database as described previously.

In the following steps, you will write the code for one of the form's button click events that establishes a connection to the SQL Azure database.

    1. To the click event of the button Connect Using ADO.NET, insert the code shown here:

Imports System.Data.SqlClient
'Imports System.Data.Odbc
'Imports System.Data.OleDb

Public Class Form1

    Private Sub Button1_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles Button1.Click
        Dim conn As New SqlClient.SqlConnection
        conn.ConnectionString = "Server=tcp:Your SQL Azure Server Name.database.windows.net;" & _
            "Database=Bluesky;User ID=Your User ID@Your SQL Azure Server Name;Password=Your Password;" & _
            "Trusted_Connection=False;Encrypt=True;"
        conn.Open()
        If conn.State = ConnectionState.Open Then
            MessageBox.Show("Connection Opened")
        End If
        conn.Close()
        MessageBox.Show("Connection Closed")
    End Sub

End Class

The trick to inserting the connection string easily and without errors is to copy and paste it from the portal.

2. Open Internet Explorer and log in to your Windows Live ID account, then open http://Sql.Azure.com. The User ID in the connection string is the same as the project name shown in the portal. Highlight the project and then click on the name of the database you want to connect to.

    3. You should see the tabbed folder with the tabs Databases and Firewall settings.

4. Click on Connection Strings in the Databases tabbed page and click Copy to Clipboard next to the ADO.NET connection string.

5. Paste the copied string into the conn.ConnectionString = " " statement.

6. Now replace the placeholder password with your own password.

    7. Build and run the form.

    8. Verify that you can connect to the database.

Alternatively, you can establish an ADO.NET connection to the database by concatenating the following parameters:

Initial Catalog=Bluesky
Data Source=tcp:Your Server Name.database.windows.net
User ID=Your Project Name@Your Server Name
Password=Your Password
Trusted_Connection=False
Encrypt=True

    Using ODBC to connect to the SQL Azure Database

This is no different from the ADO.NET connection shown in the previous steps, except that you must include Imports System.Data.Odbc in your code for the click event of the button Connect Using ODBC. Again, copying and pasting from the portal is the easiest way.

From the portal, copy and paste the ODBC connection string, as described in the previous steps, into the code for the click event of the button marked Connect Using ODBC, as shown here, and change the password:

'Imports System.Data.SqlClient
Imports System.Data.Odbc
'Imports System.Data.OleDb

Public Class Form1

    + Private Sub Button1_Click ...(shown collapsed)

    Private Sub Button2_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles Button2.Click
        Dim conn As New Odbc.OdbcConnection
        conn.ConnectionString = "Driver={SQL Server Native Client 10.0};" & _
            "Server=tcp:Your Server Name.database.windows.net;Database=Bluesky;" & _
            "Uid=Your Project Name@Your Server Name;Pwd=Your Password;Encrypt=yes;"
        conn.Open()
        If conn.State = ConnectionState.Open Then
            MessageBox.Show("Connection Opened")
        End If
        conn.Close()
        MessageBox.Show("Connection Closed")
    End Sub

End Class

    Again, build the project and verify that you can connect to the database.

    Using OLE DB to connect to the SQL Azure database

Only ADO.NET and ODBC are supported on the SQL Azure platform; however, a client connection to the SQL Azure database using OLE DB is possible. You need to construct a connection string that is supported by the SQL Server Native Client OLE DB provider (SQLNCLI10), which is used in the example that follows. The connection string parameters are:

Provider=SQLNCLI10.1;
Server=tcp:Your Server Name.database.windows.net;
Database=Bluesky;
UID=Your Project Name@Your Server Name;
Password=Your Password;

    Enter the code to the click event of the button marked Connect Using OLE DB:

'Imports System.Data.SqlClient
'Imports System.Data.Odbc
Imports System.Data.OleDb

Public Class Form1

    + Private Sub Button1_Click ...(shown collapsed)

    + Private Sub Button2_Click ...(shown collapsed)

    Private Sub Button3_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles Button3.Click
        Dim conn As New OleDbConnection
        conn.ConnectionString = "Provider=SQLNCLI10.1;" & _
            "Server=tcp:Your Server Name.database.windows.net;" & _
            "Database=Bluesky;" & _
            "UID=Your Project Name@Your Server Name;" & _
            "Password=Your Password;"
        Try
            conn.Open()
            If conn.State = ConnectionState.Open Then
                MessageBox.Show("Connection Opened")
            End If
            conn.Close()
            MessageBox.Show("Connection Closed")
        Catch ex As OleDb.OleDbException
            MessageBox.Show(ex.Message.ToString)
        Finally
        End Try
    End Sub

End Class

    Again, verify that you can establish the connection by building the project and running the form.

In the previous code, a Try…Catch exception-handling block is added, which helps trap errors that arise while connecting to the database.

    Using ADO.NET to connect to a SQL Azure database in C#

    Although the code shown earlier is in VB, it could be easily written in C#, as shown here for one of the cases:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace TestConCSharp
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            SqlConnection con = new SqlConnection();
            con.ConnectionString = "Server=tcp:Your Server Name.database.windows.net;Database=Bluesky;User ID=Your Project Name@Your Server Name;Password=Your Password;Trusted_Connection=False;Encrypt=True;";
            con.Open();
            if (con.State == ConnectionState.Open)
                MessageBox.Show("Connection Opened");
            else
                MessageBox.Show("Connection not open");
            con.Close();
            MessageBox.Show("Connection closed");
        }
    }
}

     

    Using SQL Server Management Objects (SMO) to connect to SQL Azure

For working with SQL Server 2008, Microsoft provides a collection of namespaces (SMO) containing the classes, interfaces, and so on that help you programmatically manage the server. We will now use the elements of these namespaces to access the SQL Azure server. This is a powerful tool, as it is based on the SQL Server API object model.

In the following steps, we will create a Windows Forms Application (a console application could also be used), add references to the server API assemblies that SMO requires, and show how a connection can be established to SQL Azure.

1. Create a Windows Forms Application project (herein SmoSqlAzure) and drag and drop a button onto the default form.

    2. Right-click on the References node and from the drop-down click on Add Reference….

    3. The Add Reference window gets displayed, as shown in the following screenshot:

    4. In the Add Reference window under .NET scroll down and add the following references:

    Microsoft.SqlServer.ConnectionInfo

Microsoft.SqlServer.Management.Sdk.Sfc

    Microsoft.SqlServer.Management.Smo

    5. The project folders in the Solution Explorer should appear, as shown, after the references are added.

    6. To the code page of Form1.vb, add the following code:

Imports Microsoft.SqlServer.Management.Smo

Public Class Form1

    Private Sub Button1_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles Button1.Click
        'Provide SQL Azure connection parameters as follows
        'Use the ConnectionContext of the server
        Dim srv As New Server("Your Server Name.database.windows.net")
        srv.ConnectionContext.LoginSecure = False
        srv.ConnectionContext.Login = "Your Project Name@Your Server Name"
        srv.ConnectionContext.Password = "Your Password"
        srv.ConnectionContext.DatabaseName = "Your Database Name"
        'Create a structured exception block as shown
        Try
            srv.ConnectionContext.Connect()
        Catch ex1 As Exception
            MessageBox.Show(ex1.InnerException.ToString)
        Finally
            MessageBox.Show(srv.ConnectionContext.IsOpen.ToString)
        End Try
        If (srv.ConnectionContext.IsOpen) Then
            srv.ConnectionContext.Disconnect()
            MessageBox.Show(srv.ConnectionContext.IsOpen.ToString)
        End If
    End Sub

End Class

    The code shows some of the members of the SMO such as ConnectionContext, IsOpen, Connect, Disconnect, and so on.
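As a small follow-up (an illustrative sketch I am adding here, not part of the original walkthrough), once the connection is open you could, for example, list the databases on the server through the Server.Databases collection; srv is the connected Server object from the listing above:

'Illustrative only: assumes srv is the connected Server object from the previous listing
If srv.ConnectionContext.IsOpen Then
    For Each db As Database In srv.Databases
        'Database comes from Microsoft.SqlServer.Management.Smo
        MessageBox.Show(db.Name)
    Next
End If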

    Creating database objects using ADO.NET

Here, you will connect to SQL Azure with your connection string stored in the application's settings file. You will also create and drop a database, create a table and populate it, and so on. Although only the code for ADO.NET is demonstrated, ODBC and OLE DB may also be used.

    Using connection string information in application settings

The very first thing in this task is to save the connection string information to the application settings. There are two databases on the SQL Azure server we have been working with: the master database and the database named Bluesky. In the next step, we will store the master database's connection string in the settings file.

    1. Create a Windows Application project using the Windows Forms Application template and give it a name of your own (herein it is CreateDBObjects).

    2. The program creates a project folder in the Solution Explorer, as shown in the following screenshot:

    3. Add four buttons and a label and configure their visual appearance as follows:

    The buttons are numbered from 1 to 4 and must be associated with the code that follows.

    Inserting connection string information to the application settings file

    In the following steps you will copy the connection string information from the SQL Azure portal and save it in the configuration file of your application.

1. Copy the connection string from the SQL Azure portal as you have done before (shown here with the details masked).

Server=xxxxxxxxxx.database.windows.net;Database=Bluesky;User ID=yyyyyy@xxxxxxx;Password=myPassword;Trusted_Connection=False;Encrypt=True;

2. From the Project menu, open the project's properties page.

    3. Click on the Settings tab in this window. This opens the Settings page of the application, as shown in the following screenshot:

    4. Set the following values for the four fields in the previous window:

    Name: Give a name (herein, it is mdbs).

    Type: String (no change).

    Scope: Application (choose from drop-down).

    Value: Enter the connection string of the master database you copied from the portal here. Make sure it is all in one line. If necessary, click on the value field, which opens a wider and a longer window.

    5. Build the project. An app.config file gets added to the project.

    6. Click on the app.config file.

7. The app.config file gets displayed as shown in the following screenshot. Your connection string information will appear in the indicated position between <value> and </value>. Review the file and make sure there are no syntax errors, stray characters such as the VB.NET line continuation sequence "& _", extra white space, and so on.
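For reference, the relevant portion of app.config generated for an application-scoped setting usually looks roughly like the following (an abridged, illustrative sketch; the setting name mdbs matches the steps above, and the connection string placeholder stands for the value you pasted):

<configuration>
  <applicationSettings>
    <CreateDBObjects.My.MySettings>
      <setting name="mdbs" serializeAs="String">
        <value>Server=...;Database=...;User ID=...;Password=...;Trusted_Connection=False;Encrypt=True;</value>
      </setting>
    </CreateDBObjects.My.MySettings>
  </applicationSettings>
</configuration>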

    Connect to the database on the server using the settings

     

In this code, you will connect to the database using the connection string you saved to the configuration file in the previous section.

    1. Click on the app.config file.

    2. To the click event in the code page of Form1.vb, add the following code:

Imports System.Data.SqlClient

Public Class Form1

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        GetConnected()
    End Sub

    Private Sub GetConnected()
        'Use the connection string saved in the settings page
        Dim constrg = New CreateDBObjects.My.MySettings
        Dim con As New SqlConnection
        'The current connection will use the value in the settings
        con.ConnectionString = constrg.mdbs
        con.Open()
        If con.State = ConnectionState.Open Then
            MessageBox.Show("Connection is open now")
        End If
        con.Close()
    End Sub

End Class

    3. Build the project and run the form.

    4. Verify that you can connect to the SQL Azure server.

    5. The database you will be connecting to will be the database in the connection string you supplied.

    Summary

     

In this article, the SQL Azure architecture was briefly reviewed. Accessing SQL Azure to manipulate objects using client technologies such as ADO.NET, ODBC, and OLE DB was described with examples. Connecting to SQL Azure for data manipulation using the server APIs was also covered, with examples built in Visual Studio 2008 SP1, and the best practice of using a connection string builder to thwart SQL injection attacks was considered in detail. Examples of using ADO.NET to create database objects were detailed using both client APIs and server APIs. The method of connecting to SQL Azure using the Entity Framework was not described here, but is covered with a complete example in the next article, Database applications on Windows Azure Platform accessing SQL Server databases.

Posted in Azure

    SQL Server Analysis Services partitions – Create and Manage

    Posted by Alin D on May 12, 2011

    Partitions are portions of a SQL Server Analysis Services measure group that hold some or all of the measure group’s data.

    When a measure group is first created, it contains a single partition corresponding to all the data in your fact table or view. Additional partitions need to be created for any measure group with more than 20 million rows.

    Since a majority of corporate databases have far more than 20 million rows in fact tables, you should know how to create partitions and also be aware of good partition design practices.

You can define partitions using the Business Intelligence Development Studio (BIDS). On the Partitions tab within your project, simply click the New Partition link near a measure group to open the Partition Wizard. (I won't cover the steps of the Partition Wizard here because it is fairly simple to follow.)

An alternative method for creating new partitions is through XMLA scripts, which is what BIDS generates behind the scenes.

    You can script an existing partition in SQL Server Management Studio (SSMS) by right-clicking a partition and then choosing Script Partition as CREATE to open a new query window. You will need to edit certain properties such as the partition identifier, its name and the query used for populating the partition.

    Here is a sample XMLA for a partition:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2"
               xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
               xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">
      <ID>Internet_Sales_2001</ID>
      <Name>Internet_Sales_2001</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>Adventure Works DW</DataSourceID>
        <QueryDefinition>
          SELECT
          [dbo].[FactInternetSales].[ProductKey],
          [dbo].[FactInternetSales].[OrderDateKey],
          [dbo].[FactInternetSales].[DueDateKey],
          [dbo].[FactInternetSales].[ShipDateKey],
          [dbo].[FactInternetSales].[CustomerKey],
          [dbo].[FactInternetSales].[PromotionKey],
          [dbo].[FactInternetSales].[CurrencyKey],
          [dbo].[FactInternetSales].[SalesTerritoryKey],
          [dbo].[FactInternetSales].[SalesOrderNumber],
          [dbo].[FactInternetSales].[SalesOrderLineNumber],
          [dbo].[FactInternetSales].[RevisionNumber],
          [dbo].[FactInternetSales].[OrderQuantity],
          [dbo].[FactInternetSales].[UnitPrice],
          [dbo].[FactInternetSales].[ExtendedAmount],
          [dbo].[FactInternetSales].[UnitPriceDiscountPct],
          [dbo].[FactInternetSales].[DiscountAmount],
          [dbo].[FactInternetSales].[ProductStandardCost],
          [dbo].[FactInternetSales].[TotalProductCost],
          [dbo].[FactInternetSales].[SalesAmount],
          [dbo].[FactInternetSales].[TaxAmt],
          [dbo].[FactInternetSales].[Freight],
          [dbo].[FactInternetSales].[CarrierTrackingNumber],
          [dbo].[FactInternetSales].[CustomerPONumber]
          FROM [dbo].[FactInternetSales]
          WHERE OrderDateKey &lt;= '20011231'
        </QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
      <ProcessingMode>Regular</ProcessingMode>
      <ProactiveCaching>
        <SilenceInterval>-PT1S</SilenceInterval>
        <Latency>-PT1S</Latency>
        <SilenceOverrideInterval>-PT1S</SilenceOverrideInterval>
        <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
        <AggregationStorage>MolapOnly</AggregationStorage>
        <Source xsi:type="ProactiveCachingInheritedBinding">
          <NotificationTechnique>Server</NotificationTechnique>
        </Source>
      </ProactiveCaching>
      <EstimatedRows>1013</EstimatedRows>
      <AggregationDesignID>Internet Sales 1</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Create>

    Note that when defining an effective partition, specifying the source of the data it will hold is perhaps the most important part.

As a rule of thumb, your partitions should contain between five and 20 million rows of fact data. Furthermore, you should avoid partition files greater than 500 MB in size. Partition files are stored in your Analysis Services installation folder under data\database_name\cube_name\measure_group_name.

    You can also bind a partition to a table, view or a SQL query. If a relational data warehouse has multiple tables holding fact data, you should bind partitions to such tables as long as each table size is constrained as advised above. If you have a single, large fact table, you could write SQL queries for each Analysis Services partition to retrieve only part of this data.

    Views provide a nice alternative for partition binding, especially when testing cubes. For example, if a fact table has millions of rows, processing is going to take a long time. For testing the solution, you don’t necessarily need to load all the data. Instead, create a view that selects only a portion of rows from the large table(s).
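For instance, a view along these lines (a hypothetical sketch based on the FactInternetSales table referenced later in this article) restricts processing to a single year of data while you test:

CREATE VIEW dbo.vFactInternetSales_Test
AS
SELECT *
FROM dbo.FactInternetSales
WHERE OrderDateKey <= 20011231;  -- only early data, so test processing stays fast
GO

Once testing is done, rebind the partition to the full table or to a query covering the complete date range.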

Later, when you're ready to deploy your solution into production, alter your partition definitions so that they are bound to the appropriate tables, queries or views.

    How do you decide what data to include in each partition? SQL Server Analysis Services uses partitions to speed up MDX queries. Each partition contains an XML file that defines the range of dimension member identifiers in a given partition. When an MDX query is submitted, the Analysis Services engine decides what partition files to scan based on the values in each partition’s XML file.

    The XML file is created when you process the partition and the file can be found in each partition folder (the file name is info.xml). Don’t try to edit this file – the dimension key references are internal values that cannot be retrieved from SQL Server Analysis Services.

    If the data requested by an MDX query is spread across all partitions in your measure group then Analysis Services has no choice but to read every single partition. To see how every partition in the measure group is read, record a SQL Profiler trace when you run such a query. If the requested data is contained in a single, small partition, your query will only have to scan a single partition file.

    Reading a single 500 MB file will invariably beat scanning through 200 files of the same size. However, if you have 200 partitions to read, Analysis Services could scan some of them in parallel, and the query won’t necessarily be 200 times slower without proper partitioning.

    For best MDX query performance, you should tailor partition design to the pattern of common queries. Most SQL Server Analysis Services solutions start with measure groups partitioned using a date or periodicity dimension, each partition spanning one month’s or one day’s data.

    This is a reasonable approach if your queries are typically focused on a given month or several months. But what if your queries examine data across all months and are specific to product categories? In that case, partitioning only by month won’t be optimal.

    If you have 10 years worth of data partitioned by month — which is not unusual — each query would have to examine 120 partitions. In this case, query performance could improve if you further partition data by product category dimension.

    For example, dealership sales cube users may only be interested in comparing sales across time for a particular product category – cars, trucks or motorcycles, for example. For such cubes, you could create partitions for each month and each product category.

    Like any other SQL Server Analysis Services object, partitions have a multitude of properties. Perhaps one of the most frequently discussed is partition slice. This property defines a portion of the measure group data that Analysis Services expects to be exposed by the partition.

    Most Analysis Services literature suggests that this property does not have to be set for partitions that use Multidimensional OLAP (MOLAP) storage. While in most situations Analysis Services is smart enough to figure out what dimension members are included in each partition by examining data IDs in info.xml files, to be safe you should always set the partition slice, regardless of the storage mode of your partition.

    Partition slices are defined through MDX. This is an example of what a slice definition would look like for a 2001 partition:

<Slice>[Date].[Calendar].[Calendar Year].&amp;[2001]</Slice>

    To further partition data by product categories, a slice definition would look like this:

<Slice>([Date].[Calendar].[Calendar Year].&amp;[2001], [Product].[Product Categories].[Category].&amp;[1])</Slice>

    If you don’t specify a slice for a given dimension, SQL Server Analysis Services assumes that any member of that dimension can be found in the partition.

    For example, say you specify a month and product category in partition slice but do not specify the store. Queries that examine sales data by store, but do not include any filters for date or product, may have to search through every partition.

    You can also customize the storage mode for each partition. MOLAP storage mode is optimal for data retrieval — but it copies your relational data. If you prefer to leave your data in the relational format without making its copy, then you can use Relational OLAP (ROLAP) mode for infrequently accessed partitions. For example, most recent partitions can utilize MOLAP storage while historical partitions can use ROLAP.

    SQL Server Analysis Services has an upper limit on the number of partitions — 2^31-1 = 2,147,483,647 — but cubes that have this many partitions are rare. Don’t be afraid to create as many partitions as needed.

    Occasionally, partitions may need to be merged. For example, if a majority of your queries focus on recent data and historical queries are infrequent, you may have a separate partition for each product line and for each day for 30 or 60 days.

    Once the data is stale and seldom accessed, you could combine historical partitions into weekly or monthly partitions. To merge partitions using SSMS, right-click on a partition and choose the Merge Partitions option.

    This is what the XMLA for merging 2001 and 2002 partitions looks like:

<MergePartitions xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Sources>
    <Source>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2002</PartitionID>
    </Source>
    </Sources>
    <Target>
    <DatabaseID>Adventure Works DW 2008</DatabaseID>
    <CubeID>Adventure Works</CubeID>
    <MeasureGroupID>Fact Internet Sales 1</MeasureGroupID>
    <PartitionID>Internet_Sales_2001</PartitionID>
    </Target>
    </MergePartitions>

    Be aware that you can copy aggregation design from one partition to another, which I will discuss in more detail in a future article. For now know that if you’re happy with your current aggregation design, you can assign it to a newly created or an existing partition.

    If a partition has 500,000 or more estimated rows (you can set estimated numbers of rows in BIDS) and you haven’t defined any aggregations for this partition, then BIDS 2008 warns that your application performance can be improved by adding aggregations.

    Partitions reduce the time it takes to process your measure group because each partition only loads a portion of the entire fact table and view. Remember that during processing, SQL Server Analysis Services modifies the SQL query, defining the partition before sending it over to the relational data source.

    For example, earlier I showed the definition for Internet Sales 2001 partition within the Internet Sales measure group of an Adventure Works 2008 database.

    The query that Analysis Services sends to SQL Server while processing this partition is considerably different from the original query:

    SELECT

    [dbo_FactInternetSales].[dbo_FactInternetSalesSalesAmount0_0] AS[dbo_FactInternetSalesSalesAmount0_0],
    [dbo_FactInternetSales].[dbo_FactInternetSalesOrderQuantity0_1] AS[dbo_FactInternetSalesOrderQuantity0_1],
    [dbo_FactInternetSales].[dbo_FactInternetSalesExtendedAmount0_2] AS[dbo_FactInternetSalesExtendedAmount0_2],
    [dbo_FactInternetSales].[dbo_FactInternetSalesTaxAmt0_3] AS[dbo_FactInternetSalesTaxAmt0_3],
    [dbo_FactInternetSales].[dbo_FactInternetSalesFreight0_4] AS[dbo_FactInternetSalesFreight0_4],
    [dbo_FactInternetSales].[dbo_FactInternetSalesUnitPrice0_5] AS[dbo_FactInternetSalesUnitPrice0_5],
    [dbo_FactInternetSales].[dbo_FactInternetSalesTotalProductCost0_6] AS[dbo_FactInternetSalesTotalProductCost0_6],
    [dbo_FactInternetSales].[dbo_FactInternetSalesProductStandardCost0_7] AS[dbo_FactInternetSalesProductStandardCost0_7],
    [dbo_FactInternetSales].[dbo_FactInternetSales0_8] AS[dbo_FactInternetSales0_8],
    [dbo_FactInternetSales].[dbo_FactInternetSalesPromotionKey0_9] AS[dbo_FactInternetSalesPromotionKey0_9],
    [dbo_FactInternetSales].[dbo_FactInternetSalesSalesTerritoryKey0_10] AS[dbo_FactInternetSalesSalesTerritoryKey0_10],
    [dbo_FactInternetSales].[dbo_FactInternetSalesProductKey0_11] AS[dbo_FactInternetSalesProductKey0_11],
    [dbo_FactInternetSales].[dbo_FactInternetSalesCustomerKey0_12] AS[dbo_FactInternetSalesCustomerKey0_12],
    [dbo_FactInternetSales].[dbo_FactInternetSalesCurrencyKey0_13] AS[dbo_FactInternetSalesCurrencyKey0_13],
    [dbo_FactInternetSales].[dbo_FactInternetSalesOrderDateKey0_14] AS[dbo_FactInternetSalesOrderDateKey0_14],
    [dbo_FactInternetSales].[dbo_FactInternetSalesShipDateKey0_15] AS[dbo_FactInternetSalesShipDateKey0_15],
    [dbo_FactInternetSales].[dbo_FactInternetSalesDueDateKey0_16] AS[dbo_FactInternetSalesDueDateKey0_16]
    FROM
    (
    SELECT
    [SalesAmount] AS [dbo_FactInternetSalesSalesAmount0_0],
    [OrderQuantity] AS [dbo_FactInternetSalesOrderQuantity0_1],
    [ExtendedAmount] AS [dbo_FactInternetSalesExtendedAmount0_2],
    [TaxAmt] AS [dbo_FactInternetSalesTaxAmt0_3],
    [Freight] AS [dbo_FactInternetSalesFreight0_4],
    [UnitPrice] AS [dbo_FactInternetSalesUnitPrice0_5],
    [TotalProductCost] AS [dbo_FactInternetSalesTotalProductCost0_6],
    [ProductStandardCost] AS[dbo_FactInternetSalesProductStandardCost0_7],
    1     AS [dbo_FactInternetSales0_8],
    [PromotionKey] AS [dbo_FactInternetSalesPromotionKey0_9],
    [SalesTerritoryKey] AS [dbo_FactInternetSalesSalesTerritoryKey0_10],
    [ProductKey] AS [dbo_FactInternetSalesProductKey0_11],
    [CustomerKey] AS [dbo_FactInternetSalesCustomerKey0_12],
    [CurrencyKey] AS [dbo_FactInternetSalesCurrencyKey0_13],
    [OrderDateKey] AS [dbo_FactInternetSalesOrderDateKey0_14],
    [ShipDateKey] AS [dbo_FactInternetSalesShipDateKey0_15],
    [DueDateKey] AS [dbo_FactInternetSalesDueDateKey0_16]
    FROM
    (
    SELECT
    [dbo].[FactInternetSales].[ProductKey],
    [dbo].[FactInternetSales].[OrderDateKey],
    [dbo].[FactInternetSales].[DueDateKey],
    [dbo].[FactInternetSales].[ShipDateKey],
    [dbo].[FactInternetSales].[CustomerKey],
    [dbo].[FactInternetSales].[PromotionKey],
    [dbo].[FactInternetSales].[CurrencyKey],
    [dbo].[FactInternetSales].[SalesTerritoryKey],
    [dbo].[FactInternetSales].[SalesOrderNumber],
    [dbo].[FactInternetSales].[SalesOrderLineNumber],
    [dbo].[FactInternetSales].[RevisionNumber],
    [dbo].[FactInternetSales].[OrderQuantity],
    [dbo].[FactInternetSales].[UnitPrice],
    [dbo].[FactInternetSales].[ExtendedAmount],
    [dbo].[FactInternetSales].[UnitPriceDiscountPct],
    [dbo].[FactInternetSales].[DiscountAmount],
    [dbo].[FactInternetSales].[ProductStandardCost],
    [dbo].[FactInternetSales].[TotalProductCost],
    [dbo].[FactInternetSales].[SalesAmount],
    [dbo].[FactInternetSales].[TaxAmt],
    [dbo].[FactInternetSales].[Freight],
    [dbo].[FactInternetSales].[CarrierTrackingNumber],
    [dbo].[FactInternetSales].[CustomerPONumber]
    FROM [dbo].[FactInternetSales]
WHERE OrderDateKey <= '20011231'
    ) AS [FactInternetSales]
    )
    AS [dbo_FactInternetSales]

Why should you care what query is sent to SQL Server (or another RDBMS) during partition processing? Because query hints or SET options that are valid in a regular SQL statement might not be supported in a partition query definition.

For example, BIDS will let you add the SET NOCOUNT ON statement at the beginning of the partition query. If you add this option, however, SQL Server Analysis Services will report a syntax error and fail processing.

    You can customize the partition’s processing mode, which defines whether aggregations are calculated during partition processing or by a lazy aggregation thread after processing is complete.

Lastly, you can use the storage location property to store data in the default or an alternate location. This property may come in handy if the disk where partition data is normally stored reaches its storage capacity.

Posted in SQL

    Configuring EdgeSync in Edge Transport Server – Exchange 2010

    Posted by Alin D on May 12, 2011

    In the last part of this series I demonstrated how to install the Edge Transport server role for Exchange Server 2010. In this next part I’ll go through the process of configuring the Edge Subscription between the Edge Transport server and Hub Transport server located in the internal network.

    The Edge Subscription is a relationship between an Edge Transport server and an Active Directory site, and allows the Edge Transport server to receive information about the Exchange organization such as recipients, domain names, and safelists/blocklists for anti-spam.

    This information is synchronized at regular intervals through a process called EdgeSync.

    Firewall Ports for Exchange Server 2010 Edge Transport Servers

    For EdgeSync and mail flow to work there are a few network ports that need to be open on the firewall between the Internet, the Edge Transport server, and the internal Hub Transport server.

    Edge Transport Server Network Ports for EdgeSync

    • Secure LDAP (TCP 50636) from the Hub Transport server to the Edge Transport server

    Edge Transport Server Network Ports for Mail Flow

    • SMTP (TCP 25) from the Internet to the Edge Transport server
    • SMTP (TCP 25) from the Edge Transport server to the Hub Transport server
    • SMTP (TCP 25) from the Hub Transport server to the Edge Transport server
    • DNS (UDP 53) from the Edge Transport server to a DNS server capable of public DNS lookups (i.e., to look up MX records)


    Configuring ISA Server 2006 for Edge Transport Servers

If you are using ISA Server 2006 as your firewall and want to create the access rules for the Edge Transport server, the first thing you'll need to configure is a new network protocol definition for the secure LDAP connection. ISA Server 2006 is pre-configured with a secure LDAP protocol; however, the EdgeSync process uses the non-standard port TCP 50636.

Create a new network protocol named "EdgeSync" for TCP 50636 outbound.


    Configure the ISA Server 2006 firewall policy with access rules for the Edge Transport network access required.


    Creating the Edge Subscription for Exchange Server 2010 Edge Transport Servers

    With the firewall access all configured correctly the next step is to configure the Edge Subscription itself.

    On the Edge Transport server open the Exchange Management Shell and run the following command using the New-EdgeSubscription cmdlet.

[PS] C:\>New-EdgeSubscription -FileName C:\edgesubscription.xml

     

    Confirm

    If you create an Edge Subscription, this Edge Transport server will be managed via EdgeSync replication. As a result,

    any of the following objects that were created manually will be deleted: accepted domains, message classifications,

    remote domains, and Send connectors. After creating the Edge Subscription, you must manage these objects from inside

    the organization and allow EdgeSync to update the Edge Transport server. Also, the InternalSMTPServers list of the

    TransportConfig object will be overwritten during the synchronization process.

    EdgeSync requires that this Edge Transport server is able to resolve the FQDN of the Hub Transport servers in the

    Active Directory site to which the Edge Transport server is being subscribed, and those Hub Transport servers be able

    to resolve the FQDN of this Edge Transport server. You should complete the Edge Subscription inside the organization in

the next "1440" minutes before the bootstrap account expires.

[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): y

    There are two important things to be aware of here:

    • You must complete the next step of the Edge Subscription process within 1440 minutes (24 hours), otherwise you’ll need to generate a new Edge Subscription again
    • The Hub Transport servers in the Active Directory site that will be subscribed must be able to resolve the FQDN of the Edge Transport server. You can either add DNS records manually or use a HOSTS file entry (a sample entry is shown below).
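For example, a HOSTS file entry on each Hub Transport server might look like this (the IP address and FQDN are hypothetical; substitute your Edge Transport server's actual values):

# C:\Windows\System32\drivers\etc\hosts on the Hub Transport server
10.0.3.2    edge01.example.com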

Copy the "edgesubscription.xml" file to the Hub Transport server. Launch the Exchange Management Console and navigate to Organization Management/Hub Transport.

    In the Actions pane click on New Edge Subscription.


     

    Browse and select the Active Directory site to be subscribed, as well as the XML file that you copied from the Edge Transport server.


    Click the New button to complete the wizard.

    After the Edge Subscription has been created you will see two Send Connectors configured for your organization.

     


    It can take up to an hour before the first Edge synchronization process runs, but you can run it manually if you need to. On the Hub Transport server launch the Exchange Management Shell and run the following command using the Start-EdgeSynchronization cmdlet.

[PS] C:\>Start-EdgeSynchronization -Server esp-ho-ex2010a

     

    RunspaceId : b7415ae2-f763-449e-bb36-20a6a18759cd

    Result : Success

    Type : Configuration

    Name : Exchnage-Test

    FailureDetails :

    StartUTC : 5/7/2011 1:27:39 PM

    EndUTC : 5/7/2011 1:28:07 PM

    Added : 290

    Deleted : 0

    Updated : 0

    Scanned : 295

    TargetScanned : 0

     

    RunspaceId : b7415ae2-f763-449e-bb36-20a6a18759cd

    Result : Success

    Type : Recipients

    Name : Exchange-test1

    FailureDetails :

    StartUTC : 5/7/2011 1:27:39 PM

    EndUTC : 5/7/2011 1:28:08 PM

    Added : 401

    Deleted : 0

    Updated : 0

    Scanned : 401

    TargetScanned : 0

    After the initial Edge synchronization has occurred you will be able to see the Send Connectors and Accepted Domains configured on the Edge Transport server.



    Testing Mail Flow

    After the Edge Subscription is in place and you’ve synchronized at least once you can send email between your Exchange organization and an external mailbox, and then inspect the email message headers to verify that the messages are traversing your Edge Transport server.

    Received: from Exchange-test (10.0.3.2) by

    Exchange-test1 (10.0.1.4) with Microsoft SMTP Server

    (TLS) id 14.1.289.1; Sat, 7 May 2011 23:50:10 +1000

    Received: from (192.168.0.45) by Exchange-test

    (10.0.3.2) with Microsoft SMTP Server id 14.1.218.12; Sat, 7 May 2011

    23:50:07 +1000

    MIME-Version: 1.0

    Content-Type: text/plain


Posted in Exchange

    How to put SQL Server catalog views to work

    Posted by Alin D on January 21, 2011

    SQL Server 2008 provides hundreds of system views that let you view database and server metadata without having to access the actual source of that data. One category of system views that is particularly useful when retrieving information used by the database engine is the catalog view. SQL Server catalog views are a collection of views that provide a general interface to the catalog metadata.

    The amount of information that you can access through catalog views is extensive. For example, use catalog views to retrieve information about the tables and views in a particular database. Or retrieve information about security objects, full-text search, database mirroring, partitions or a variety of other catalog-related information.

SQL Server catalog views can be divided into subcategories along functional lines, such as database object views and security views. In this article, I provide examples of views from several of these subcategories. I created the examples on a local instance of SQL Server 2008; where a specific database needed to be specified, I used the AdventureWorks 2008 database. Be aware, however, that the results shown for the sample queries are specific to my instance of SQL Server. If you try out these examples yourself, you're likely to see different results.

    For each catalog view I demonstrate (as well as all catalog views in general), SQL Server provides a help topic specific to that view in SQL Server Books Online. Because of the type of information that each view returns, you’ll want to read the details of each view by accessing that topic.

    Database and file catalog views

    The first set of SQL Server catalog views we’ll look at let you access information about the databases installed on an instance of SQL Server and the files associated with those databases. In the following example, I use the sys.databases catalog view to retrieve a list of databases whose names begin with the term “adventureworks”.

    SELECT
    name AS DbName,
    database_id AS DatabaseID,
    user_access AS UserAccess,
    user_access_desc AS AccessDescript
    FROM
    sys.databases
    WHERE
    name LIKE 'adventureworks%';

    As the SELECT statement shows, you access a catalog view as you would any other view. In this case, I’ve used the sys.databases view in the FROM clause to retrieve details about an instance’s databases. For a list of columns supported by the view, along with a description of each column, refer to the view’s topic in SQL Server Books Online. The descriptions include explanations of the codes used as values within the columns. For example, the topic explains the codes that are stored in the user_access column, which shows whether the database permits multi-user access (0), single-user access (1) or restricted-user access (2).

    The SELECT statement above retrieves the name and ID of each “adventureworks” database, along with the user access code and description of the code. The following table shows the results returned by the query when I run it against my local instance of SQL Server 2008:

    DbName DatabaseID UserAccess AccessDescript
    AdventureWorks 7 0 MULTI_USER
    AdventureWorks2008 8 0 MULTI_USER
    AdventureWorksDW2008 10 0 MULTI_USER

    So, I’ve installed three “adventureworks” databases on my system. By default, a user access code of 0 is assigned to each one. The description of the user access code is provided in the user_access_desc column, which indicates that each database has a user access of MULTI_USER.
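As a side note I am adding here (not part of the original example), you could change a database's access mode and rerun the query to see a different value in those columns; for instance:

-- Switch a test database to single-user mode and back, then rerun the sys.databases query
ALTER DATABASE AdventureWorks2008 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE AdventureWorks2008 SET MULTI_USER;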

    SQL Server also provides a catalog view that lets you retrieve information about database files. In the following SELECT statement, I use the sys.master_files view to access details about the files associated with the AdventureWorks 2008 database:

    SELECT
    file_id AS FileID,
    name AS LogicalName,
    type AS FileType,
    type_desc AS TypeDescript
    FROM
    sys.master_files
    WHERE
    database_id IN
    (
    SELECT database_id
    FROM sys.databases
    WHERE name = 'AdventureWorks2008'
    );

    For each file associated with the AdventureWorks 2008 database, I retrieve the file ID, the file’s logical name, the file type code, and the description that explains the type code. Again, reference the topic associated with the sys.master_files catalog view for details about the columns.

    Notice that, in order to retrieve the correct data, my WHERE clause includes a subquery that retrieves the database ID from the sys.databases view. This allows me to access the file information without knowing the ID, only the database name.

    Once I retrieve the database ID, I can limit my results to information about a specific database. The following results show the information returned about the AdventureWorks 2008 database:

    FileID LogicalName FileType TypeDescript
    1 AdventureWorks2008_Data 0 ROWS
    2 AdventureWorks2008_Log 1 LOG
    65537 FileStreamDocuments 2 FILESTREAM

    Another SQL Server catalog view is sys.database_files. This view is similar to the sys.master_files view, except that it’s specific to a database. In the following example, I first specify a USE statement that targets the AdventureWorks 2008 database, and then I run a SELECT statement that retrieves data from the sys.database_files view:

    USE AdventureWorks2008;
    GO
    SELECT
    file_id AS FileID,
    name AS FIleName,
    type_desc AS TypeDescript,
    state_desc AS StateDescript,
    size AS Size
    FROM
    sys.database_files;

    In this case, I retrieve the file ID, logical file name, file type, current state, and size. The following results show the information returned by the statement (on my system):

    FileID FileName TypeDescript StateDescript Size
    1 AdventureWorks2008_Data ROWS ONLINE 24928
    2 AdventureWorks2008_Log LOG ONLINE 256
    65537 FileStreamDocuments FILESTREAM ONLINE 0

As the results indicate, there are three files associated with the AdventureWorks 2008 database. By using the sys.database_files catalog view, I'm able to easily access data on each one. You can get more details about this and other database views by opening "Databases and Files Catalog Views (Transact-SQL)" in SQL Server Books Online and then selecting the specific view.

    Object Catalog Views

    The next subcategory of SQL Server catalog views is object views. As the name suggests, these views return information about a database’s objects. One of the most useful of these views is sys.objects, which returns information about the user-defined, schema-scoped objects within a specified database. For example, in the following SELECT statement, I use the sys.objects view to retrieve a list of unique constraints in the AdventureWorks database:

    USE AdventureWorks2008;
    GO
    SELECT
    a.name AS ObjectName,
    (
    SELECT b.name
    FROM sys.objects b
    WHERE b.object_id = a.parent_object_id
    ) AS ParentTable
    FROM
    sys.objects a
    WHERE
    a.type = 'UQ';

    In the SELECT statement, I retrieve the name of the unique constraints and the name of the parent objects (tables) associated with those constraints. To retrieve the name of the table, I create a subquery that associates the object ID of the parent to a regular object ID. (Both parent and child objects are available through the sys.objects view.)

    I also include a WHERE clause that returns only rows whose type value is UQ. The UQ code stands for unique constraint. As the following results indicate, the AdventureWorks 2008 database contains only one unique constraint, which is defined on the document table:

    ObjectName ParentTable
    UQ__Document__F73921F730F848ED Document

    Another useful view in the object subcategory is sys.tables, which returns a list of user-defined tables configured in a database. The following SELECT statement uses this view to retrieve a list of tables in the HumanResources schema of the AdventureWorks 2008 database:

    USE AdventureWorks2008;
    GO
    SELECT
    name AS TableName,
    is_replicated AS IsReplicated,
    is_tracked_by_cdc AS IsTracked
    FROM
    sys.tables
    WHERE
    schema_id IN
    (
    SELECT schema_id
    FROM sys.schemas
    WHERE name = 'HumanResources'
    );

As you can see, I retrieve the name of the tables along with whether they're replicated or tracked by Change Data Capture. Notice that in order to limit the results to the HumanResources schema I include a subquery in the WHERE clause that retrieves the schema ID based on the schema name. The following table shows the results returned through the sys.tables view:

    TableName IsReplicated IsTracked
    Department 0 0
    Employee 0 0
    EmployeeDepartmentHistory 0 0
    EmployeePayHistory 0 0
    JobCandidate 0 0
    Shift 0 0

    As the results indicate, there are six tables in the HumanResources schema. The is_replicated and is_tracked_by_cdc columns are both configured as bit columns in which 0 means false and 1 means true. That means none of the tables are replicated or tracked by Change Data Capture.

    The next example is similar to the last except that I use the sys.procedures view to retrieve a list of procedures contained in the HumanResources schema:

USE AdventureWorks2008;
    GO
    SELECT
    name AS ProcName,
    type AS ProcType,
    type_desc AS ProcDescript
    FROM
    sys.procedures
    WHERE
    schema_id IN
    (
    SELECT schema_id
    FROM sys.schemas
    WHERE name = 'HumanResources'
    );

    In this case, I retrieve the name and type of procedure (both the procedure code and description). As before, I include a subquery in the WHERE clause to limit the results to the specified schema. The SELECT statement returns the results shown in the following table:

    ProcName ProcType ProcDescript
    uspUpdateEmployeeHireInfo P SQL_STORED_PROCEDURE
    uspUpdateEmployeeLogin P SQL_STORED_PROCEDURE
    uspUpdateEmployeePersonalInfo P SQL_STORED_PROCEDURE

    As you can see, the results include only SQL stored procedures. However, the sys.procedures view will also return Common Language Runtime stored procedures, extended stored procedures, and replication-filter procedures if any exist in the database.
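If you want to see every type of procedure the view can return, a variation like the following (a small sketch I am adding, assuming the same database) simply drops the schema filter and sorts by type:

USE AdventureWorks2008;
GO
SELECT
name AS ProcName,
type AS ProcType,
type_desc AS ProcDescript
FROM
sys.procedures
ORDER BY
type, name;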

    If you want to retrieve information about columns in tables or views, use the sys.columns view. In the following example, I retrieve the names of the schema, table and columns for each view in the AdventureWorks 2008 database that includes the term “history” within its name:

USE AdventureWorks2008;
GO
SELECT
s.name AS SchemaName,
o.name AS ViewName,
c.name AS ColumnName
FROM
sys.columns c
INNER JOIN sys.objects o
ON c.object_id = o.object_id
INNER JOIN sys.schemas s
ON o.schema_id = s.schema_id
WHERE
o.type = 'V' AND
o.name LIKE '%history%'
ORDER BY
s.name, o.name, c.name;

    To pull the names of the views and schemas, I join the sys.columns view to the sys.objects and sys.schemas views. I then use the WHERE clause to limit the results to type V objects (views) with the term “history” in the object name. As it turns out, the AdventureWorks 2008 database includes only one view that includes the term “history,” as shown in the following results:

    SchemaName ViewName ColumnName
    HumanResources vEmployeeDepartmentHistory BusinessEntityID
    HumanResources vEmployeeDepartmentHistory Department
    HumanResources vEmployeeDepartmentHistory EndDate
    HumanResources vEmployeeDepartmentHistory FirstName
    HumanResources vEmployeeDepartmentHistory GroupName
    HumanResources vEmployeeDepartmentHistory LastName
    HumanResources vEmployeeDepartmentHistory MiddleName
    HumanResources vEmployeeDepartmentHistory Shift
    HumanResources vEmployeeDepartmentHistory StartDate
    HumanResources vEmployeeDepartmentHistory Suffix
    HumanResources vEmployeeDepartmentHistory Title

    SQL Server also provides a catalog view that lets you view the indexes configured in a database. For example, in the following SELECT statement, I use the sys.indexes view to retrieve all the indexes defined on the Person table in the AdventureWorks 2008 database:

USE AdventureWorks2008;
    GO
    SELECT
    name AS IndexName,
    type_desc AS IndexType,
    is_primary_key AS PrimaryKey,
    is_unique AS UniqueIndex
    FROM
    sys.indexes
    WHERE
    object_id IN
    (
    SELECT object_id
    FROM sys.objects
    WHERE name = 'person'
    );

In this case, I retrieve the name and type of each index defined on the Person table, along with whether the index is the primary key or a unique index. To limit the results to the Person table, I include a subquery in the WHERE clause to retrieve the object ID associated with the Person table. The following results show the indexes defined on that table:

    IndexName IndexType PrimaryKey UniqueIndex
    PK_Person_BusinessEntityID CLUSTERED 1 1
    IX_Person_LastName_FirstName_MiddleName NONCLUSTERED 0 0
    AK_Person_rowguid NONCLUSTERED 0 1
    PXML_Person_AddContact XML 0 0
    PXML_Person_Demographics XML 0 0
    XMLPATH_Person_Demographics XML 0 0
    XMLPROPERTY_Person_Demographics XML 0 0
    XMLVALUE_Person_Demographics XML 0 0

    The examples I’ve shown here are only some of the object catalog views that SQL Server supports. There are also object views related to assembly modules, computed columns, events, identity columns, triggers, synonyms and other object types. For a list of all object views, see the topic “Object Catalog Views (Transact-SQL)” in SQL Server Books Online.
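As an illustration of one more view from this subcategory (a sketch I am adding, not one of the article's original examples), the sys.synonyms view can be queried in the same way to list any synonyms in a database and the objects they point to:

USE AdventureWorks2008;
GO
SELECT
name AS SynonymName,
base_object_name AS BaseObject
FROM
sys.synonyms;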

    Security Catalog Views

    Another important subcategory of catalog views is the security views, which let you view information about database security, server security, encryption, and audits. For example, you can use the sys.database_principals view to retrieve information about each security principal in a database. In the following SELECT statement, I use this view to retrieve the type and name of each principal in the AdventureWorks 2008 database:

    USE AdventureWorks2008;
    GO
    SELECT
    type_desc AS PrincipalType,
    name AS PrincipalName
    FROM
    sys.database_principals
    ORDER BY
    type_desc, name;

    As you can see, this statement is very straightforward. I simply retrieve the type and name and order the results accordingly, as shown in following table:

    PrincipalType PrincipalName
    DATABASE_ROLE db_accessadmin
    DATABASE_ROLE db_backupoperator
    DATABASE_ROLE db_datareader
    DATABASE_ROLE db_datawriter
    DATABASE_ROLE db_ddladmin
    DATABASE_ROLE db_denydatareader
    DATABASE_ROLE db_denydatawriter
    DATABASE_ROLE db_owner
    DATABASE_ROLE db_securityadmin
    DATABASE_ROLE public
    SQL_USER guest
    SQL_USER INFORMATION_SCHEMA
    SQL_USER sys
    WINDOWS_USER dbo

    Another security view specific to databases is the sys.database_permissions view. As you would expect, this view returns information about a database’s permissions. For instance, in the following SELECT statement, I use the sys.database_permissions view to retrieve a count of the number of SELECT permissions that have been granted on the AdventureWorks 2008 database:

USE AdventureWorks2008;
    GO
    SELECT
    class_desc AS ClassName,
    permission_name AS PermissionName,
    state_desc AS StateName,
    COUNT(*) AS NumberOfSelect
    FROM
    sys.database_permissions
    WHERE
    type = 'SL' AND
    state = 'G'
    GROUP BY
    class_desc,
    permission_name,
    state_desc;

    In this statement, I group the results by class, permission type, and state in order to arrive at the count. I use the WHERE clause to limit the permission type to SELECT (SL) and the state to GRANT (G). As the following results show, 129 SELECT permissions have been granted on the AdventureWorks 2008 database:

    ClassName PermissionName StateName NumberOfSelect
    OBJECT_OR_COLUMN SELECT GRANT 129

    You can also use security views to retrieve information at the server level. For instance, in the following example I use the sys.server_principals view to retrieve the type and name of the principals on my local instance of SQL Server:

    SELECT
    type_desc AS PrincipalType,
    name AS PrincipalName
    FROM
    sys.server_principals
    WHERE
    type <> 'C'
    ORDER BY
    type_desc, name;

    Notice in this statement that I use the WHERE clause to exclude type C from the results. Type C refers to certificate mapped logins, which I do not want to include. The following results show the information returned by the sys.server_principals view:

    PrincipalType PrincipalName
    SERVER_ROLE bulkadmin
    SERVER_ROLE dbcreator
    SERVER_ROLE diskadmin
    SERVER_ROLE processadmin
    SERVER_ROLE public
    SERVER_ROLE securityadmin
    SERVER_ROLE serveradmin
    SERVER_ROLE setupadmin
    SERVER_ROLE sysadmin
    SQL_LOGIN ##MS_PolicyEventProcessingLogin##
    SQL_LOGIN ##MS_PolicyTsqlExecutionLogin##
    SQL_LOGIN sa
WINDOWS_LOGIN BOBE024\Administrator
WINDOWS_LOGIN NT AUTHORITY\LOCAL SERVICE
WINDOWS_LOGIN NT AUTHORITY\SYSTEM

    You can also view details about server permissions by using the sys.server_permissions view, as I do in the following example:

    SELECT
    class_desc AS ClassName,
    permission_name AS PermissionName,
    state_desc AS StateName,
    COUNT(*) AS NumberOfGrant
    FROM
    sys.server_permissions
    WHERE
    type = 'CO' AND
    state = 'G'
    GROUP BY
    class_desc,
    permission_name,
    state_desc;

    In this statement, I’m retrieving the number of CONNECT permissions that have been granted on my local instance of SQL Server. As you saw in a previous example, I’m grouping the data by class, permission type, and state in order to arrive at the results, which are shown in the following table:

    ClassName PermissionName StateName NumberOfGrant
    ENDPOINT CONNECT GRANT 4
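
    If you also want to know who holds those grants, you can join sys.server_permissions to sys.server_principals on the grantee_principal_id column. The following statement is a minimal illustrative sketch:

    -- Illustrative only: list the principals that hold granted CONNECT permissions
    SELECT
    sp.name AS GranteeName,
    p.class_desc AS ClassName,
    p.state_desc AS StateName
    FROM
    sys.server_permissions AS p
    INNER JOIN sys.server_principals AS sp
    ON p.grantee_principal_id = sp.principal_id
    WHERE
    p.type = 'CO' AND
    p.state = 'G'
    ORDER BY
    sp.name;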

    In addition to the security views I’ve shown you here, SQL Server supports a number of others. See a list of those views in “System Views (Transact-SQL)” in SQL Server Books Online.
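
    One of those views worth knowing is sys.server_role_members, which records the membership of the fixed server roles. Because it stores only principal IDs, you typically join it to sys.server_principals twice, once for the role and once for the member, as in this illustrative example:

    -- Illustrative only: list server role memberships by name
    SELECT
    r.name AS RoleName,
    m.name AS MemberName
    FROM
    sys.server_role_members AS srm
    INNER JOIN sys.server_principals AS r
    ON srm.role_principal_id = r.principal_id
    INNER JOIN sys.server_principals AS m
    ON srm.member_principal_id = m.principal_id
    ORDER BY
    r.name, m.name;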

    Moving Forward

    In this article, I’ve introduced you to several subcategories of catalog views, which themselves are a category of system views. The subcategories I’ve demonstrated here are only a portion of those supported by SQL Server. Find a list of all the subcategories in the topic “Catalog Views (Transact-SQL)” in SQL Server Books Online. From there, link to a list of views in each subcategory, and from there to the individual views. As I stated earlier, when working with catalog views, I recommend that you look closely at their associated topics in SQL Server Books Online for specific details about the type of information that each view returns. The more familiar you become with these views, the more valuable a resource they’ll become.


    Setup FTP 7.5 on Windows Server 2008 and publish through Forefront TMG 2010

    Posted by Alin D on November 2, 2010

    Introduction

    Microsoft has created a new FTP service that has been completely rewritten for Windows Server® 2008. This new FTP service incorporates many new features that enable web authors to publish content better than before, and offers web administrators more security and deployment options.

    • Integration with IIS 7: IIS 7 has a brand-new administration interface and configuration store, and the new FTP service is tightly integrated with this new design. The old IIS 6.0 metabase is gone, and a new configuration store that is based on the .NET XML-based *.config format has taken its place. In addition, IIS 7 has a new administration tool, and the new FTP server plugs seamlessly into that paradigm.
    • Support for new Internet standards: One of the most significant features in the new FTP server is support for FTP over SSL. The new FTP server also supports other Internet improvements such as UTF8 and IPv6.
    • Shared hosting improvements: By fully integrating into IIS 7, the new FTP server makes it possible to host FTP and Web content from the same site by simply adding an FTP binding to an existing Web site. In addition, the FTP server now has virtual host name support, making it possible to host multiple FTP sites on the same IP address. The new FTP server also has improved user isolation, now making it possible to isolate users through per-user virtual directories.
    • Custom authentication providers: The new FTP server supports authentication using non-Windows accounts for IIS Managers and .NET Membership.
    • Improved logging support: FTP logging has been enhanced to include all FTP-related traffic, unique tracking for FTP sessions, FTP sub-statuses, additional detail fields in FTP logs, and much more.
    • New supportability features: IIS 7 has a new option to display detailed error messages for local users, and the FTP server supports this by providing detailed error responses when logging on locally to an FTP server. The FTP server also logs detailed information using Event Tracing for Windows (ETW), which provides additional detailed information for troubleshooting.
    • Extensible feature set: FTP supports extensibility that allows you to extend the built-in functionality that ships with the FTP service. More specifically, there is support for creating your own authentication and authorization providers. You can also create providers for custom FTP logging and for determining the home directory information for your FTP users.

    Additional information about new features in FTP 7.5 is available in the “What’s New for Microsoft FTP 7.5?” topic on Microsoft’s http://www.iis.net/ web site.

    This document will walk you through installing the new FTP service and troubleshooting installation issues.

    Installing FTP for IIS 7.5

    IIS 7.5 for Windows Server 2008 R2

    1. On the taskbar, click Start, point to Administrative Tools, and then click Server Manager.
    2. In the Server Manager hierarchy pane, expand Roles, and then click Web Server (IIS).
    3. In the Web Server (IIS) pane, scroll to the Role Services section, and then click Add Role Services.
    4. On the Select Role Services page of the Add Role Services Wizard, expand FTP Server.
    5. Select FTP Service. (Note: To support ASP.NET Membership or IIS Manager authentication for the FTP service, you will also need to select FTP Extensibility.)
    6. Click Next.
    7. On the Confirm Installation Selections page, click Install.
    8. On the Results page, click Close.

    Installing FTP for IIS 7.0

    Prerequisites

    The following items are required to complete the procedures in this section:

    1. You must be using Windows Server 2008.
    2. Internet Information Services 7.0 must be installed.
    3. If you are going to manage the new FTP server by using the IIS 7.0 user interface, the administration tool will need to be installed.
    4. You must install the new FTP server as an administrator. (See the Downloading and Installing section for more.)
    5. IIS 7.0 supports a shared configuration environment, which must be disabled on each server in a web farm before installing the new FTP server for each node. Note: Shared configuration can be re-enabled after the FTP server has been installed.
    6. The FTP server that is shipped on the Windows Server 2008 DVD must be uninstalled before installing the new FTP server.

    Downloading the right version for your server

    There are two separate downloadable packages for the new FTP server; you will need to download the appropriate package for your version of Windows Server 2008:

    Launching the installation package

    You will need to run the installation package as an administrator. This can be accomplished by one of the following methods:

    1. Logging in to your server using the actual account named “Administrator”, then browsing to the download pages listed above or double-clicking the download package if you have saved it to your server.
    2. Logging on using an account with administrator privileges, opening a command prompt by right-clicking the Command Prompt item in the Accessories menu and selecting “Run as administrator”, and then typing the appropriate command listed below for your version of Windows to run the installation:
      • 32-bit Windows Versions:
        • msiexec /i FTP 7_x86_75.msi
      • 64-bit Windows Versions:
        • msiexec /i FTP 7_x64_75.msi

    Note: One of the above steps is required because the User Account Control (UAC) security component in the Windows Vista and Windows Server 2008 operating systems prevents access to your applicationHost.config file. For more information about UAC, please see the following documentation:

    The following steps walk you through all of the required settings to add FTP publishing for the Default Web Site.

    Walking through the installation process
    1. When the installation package opens, you should see the following screen. Click Next to continue.
    2. On the next screen, click the I accept check box if you agree to the license terms, and then click Next.
    3. The following screen lists the installation options. Choose which options you want installed from the list, and then click Next.
      • Common files: this option includes the schema file. When installing in a shared server environment, each server in the web farm will need to have this option installed.
      • FTP Publishing Service: this option includes the core components of the FTP service. This option is required for the FTP service to be installed on the server.
      • Managed Code Support: this is an optional component; features that use managed extensibility, such as ASP.NET and IIS Manager authentication, require it. Note: This feature cannot be installed on Windows Server 2008 Core.
      • Administration Features: this option installs the FTP 7 management user interface. This requires the IIS 7.0 manager and .NET framework 2.0 to be installed. Note: This feature cannot be installed on Windows Server 2008 Core.
    4. On the following screen, click Install to begin installing the options that you chose on the previous screen.
    5. When installation has completed, click Read notes to view the FTP README file, or click Finish to close the installation dialog.

    Note: If an error occurs during installation, you will see an error dialog. Refer to the Troubleshooting Installation Issues section of this document for more information.

    Troubleshooting Installation Issues

    When the installation of FTP 7 fails for some reason, you should see a dialog with a button called “Installation log”. Clicking the “Installation log” button will open the MSI installation log that was created during the installation. You can also manually enable installation logging by running the appropriate command listed below for your version of Windows. This will create a log file that will contain information about the installation process:

    • 32-bit Windows Versions:
      • msiexec /L FTP 7.log /I FTP 7_x86_75.msi
    • 64-bit Windows Versions:
      • msiexec /L FTP 7.log /I FTP 7_x64_75.msi

    You can analyze this log file after a failed installation to help determine the cause of the failure.

    Clicking the “Online information” button on the error dialog will launch the “Installing and Troubleshooting FTP 7.5” document in your web browser.

    Note: If you attempt to install the downloaded package on an unsupported platform, the following dialog will be displayed:

    Known Issues in This Release

    The following issues are known to exist in this release:

    1. While Web-based features can be delegated to remote managers and added to web.config files using the new IIS 7 configuration infrastructure, FTP features cannot be delegated or stored in web.config files.
    2. The icon of a combined Web/FTP site may be marked with a question mark even though the site is currently started with no error. This occurs when a site has a mixture of HTTP/FTP bindings.
    3. After adding FTP publishing to a Web site, clicking the site’s node in the tree view of the IIS 7 management tool may not display the FTP icons. To work around this issue, use one of the following:
      • Hit F5 to refresh the IIS 7 management tool.
      • Click on the Sites node, then double-click on the site name.
      • Close and re-open the IIS 7 management tool.
    4. When you add a custom provider in the site defaults, it shows up under each site. However, if you attempt to remove or modify the settings for a custom provider at the site level, IIS creates an empty <providers /> section for the site, but the resulting configuration for each site does not change. For example, if the custom provider is enabled in the site defaults, you cannot disable it at the site level. To work around this problem, open your applicationHost.config file as an administrator, add a <clear/> element to the list of custom authentication providers, then manually add the custom provider to your settings. For example, in order to add the IIS Manager custom authentication provider, you would add settings like the following example:
      <ftpServer>
        <security>
          <authentication>
            <customAuthentication>
              <providers>
                <clear />
                <add name="IisManagerAuth" enabled="true" />
              </providers>
            </customAuthentication>
          </authentication>
        </security>
      </ftpServer>
    5. The following issues are specific to the IIS 7.0 release:
      • The FTP service that is shipped on the Windows Server 2008 DVD should not be installed after the new FTP service has been installed. The old FTP service does not detect that the new FTP service has been installed, and running both FTP services at the same time may cause port conflicts.
      • IIS 7 can be uninstalled after the new FTP service has been installed, and this will cause the new FTP service to fail. If IIS is reinstalled, new copies of the IIS configuration files will be created and the new FTP service will continue to fail because the configuration information for the new FTP service is no longer in the IIS configuration files. To fix this problem, re-run the setup for the new FTP service and choose “Repair”.

    To Add an FTP Site from the IIS Management Console

    Creating a New FTP Site Using IIS 7 Manager

    The new FTP service makes it easy to create new FTP sites by providing you with a wizard that walks you through all of the required steps to create a new FTP site from scratch.

    Step 1: Use the FTP Site Wizard to Create an FTP Site

    In this first step you will create a new FTP site that anonymous users can open.

    Note: The settings listed in this walkthrough specify “%SYSTEMDRIVE%\inetpub\ftproot” as the path to your FTP site. You are not required to use this path; however, if you change the location for your site you will have to change the site-related paths that are used throughout this walkthrough.

    1. Open IIS 7 Manager. In the Connections pane, click the Sites node in the tree.
    2. As shown in the image below, right-click the Sites node in the tree and click Add FTP Site, or click Add FTP Site in the Actions pane.
      • Create a folder at “%SystemDrive%\inetpub\ftproot”
      • Set the permissions to allow anonymous access:
        1. Open a command prompt.
        2. Type the following command:
          ICACLS "%SystemDrive%\inetpub\ftproot" /Grant IUSR:R /T
        3. Close the command prompt.


    3. When the Add FTP Site wizard appears:
      • Enter “My New FTP Site” in the FTP site name box, then navigate to the %SystemDrive%\inetpub\ftproot folder that you created earlier. Note that if you choose to type in the path to your content folder, you can use environment variables in your paths.
      • When you have completed these items, click Next.


    4. On the next page of the wizard:
      • Choose an IP address for your FTP site from the IP Address drop-down, or choose to accept the default selection of “All Unassigned.” Because you will be using the administrator account later in this walk-through, you must ensure that you restrict access to the server and enter the local loopback IP address for your computer by typing “127.0.0.1” in the IP Address box. (Note: If you are using IPv6, you should also add the IPv6 localhost binding of “::1”.)
      • Enter the TCP/IP port for the FTP site in the Port box. For this walk-through, choose to accept the default port of 21.
      • For this walk-through, do not use a host name, so make sure that the Virtual Host box is blank.
      • Make sure that the Certificates drop-down is set to “Not Selected” and that the Allow SSL option is selected.
      • When you have completed these items, click Next.


    5. On the next page of the wizard:
      • Select Anonymous for the Authentication settings.
      • For the Authorization settings, choose “Anonymous users” from the Allow access to drop-down, and select Read for the Permissions option.
      • When you have completed these items, click Finish.


    Summary

    You have successfully created a new FTP site using the new FTP service. To recap the items that you completed in this step:

    1. You created a new FTP site named “My New FTP Site”, with the site’s content root at “%SystemDrive%\inetpub\ftproot”.
    2. You bound the FTP site to the local loopback address for your computer on port 21, and you chose not to use Secure Sockets Layer (SSL) for the FTP site.
    3. You created a default rule for the FTP site to allow anonymous users “Read” access to the files.

    Step 2: Adding Additional FTP Security Settings

    Creating a new FTP site that anonymous users can browse is useful for public download sites, but web authoring is equally important. In this step, you add additional authentication and authorization settings for the administrator account. To do so, follow these steps:

    1. In IIS 7 Manager, click the node for the FTP site that you created earlier, then double-click FTP Authentication to open the FTP authentication feature page.
    2. When the FTP Authentication page displays, highlight Basic Authentication and then click Enable in the Actions pane.
    3. In IIS 7 Manager, click the node for the FTP site to re-display the icons for all of the FTP features.
    4. You must add an authorization rule so that the administrator can log in. To do so, double-click the FTP Authorization Rules icon to open the FTP authorization rules feature page.
    5. When the FTP Authorization Rules page is displayed, click Add Allow Rule in the Actions pane.
    6. When the Add Allow Authorization Rule dialog box displays:
      • Select Specified users, then type “administrator” in the box.
      • For Permissions, select both Read and Write.
      • When you have completed these items, click OK.
    Summary

    To recap the items that you completed in this step:

    1. You added Basic authentication to the FTP site.
    2. You added an authorization rule that allows the administrator account both “Read” and “Write” permissions for the FTP site.

    Step 3: Logging in to Your FTP Site

    In Step 1, you created an FTP site that anonymous users can access, and in Step 2 you added additional security settings that allow an administrator to log in. In this step, you log in both anonymously and with your administrator account.

    Note: In this step you log in to your FTP site using the local administrator account. When creating the FTP site in Step 1, you bound it to the local loopback IP address; if you did not use the loopback address, use SSL to protect your account credentials. If you prefer to use a separate user account instead of the administrator account, set the correct permissions for that user account on the appropriate folders.

    Logging in to your FTP site anonymously
    1. On your FTP server, open a command prompt session.
    2. Type the following command to connect to your FTP server: FTP localhost
    3. When prompted for a user name, enter “anonymous”.
    4. When prompted for a password, enter your email address.

    You should now be logged in to your FTP site anonymously. Based on the authorization rule that you added in Step 1, you should only have Read access to the content folder.

    Logging in to your FTP site using your administrator account
    1. On your FTP server, open a command prompt session.
    2. Type the following command to connect to your FTP server: FTP localhost
    3. When prompted for a user name, enter “administrator”.
    4. When prompted for a password, enter your administrator password.

    You should now be logged in to your FTP site as the local administrator. Based on the authorization rule that you added in Step 2 you should have both Read and Write access to the content folder.

    Summary

    To recap the items that you completed in this step:

    1. You logged in to your FTP site anonymously.
    2. You logged in to your FTP site as the local administrator.

    Publish FTP site from Forefront TMG 2010

    Let’s begin

    Note:
    Keep in mind that the information in this article is based on a release candidate version of Microsoft Forefront TMG and is subject to change.

    A few months ago, Microsoft released RC 1 (Release Candidate) of Microsoft Forefront TMG (Threat Management Gateway), which has a lot of new exciting features.

    One of the new features of Forefront TMG is its ability to allow FTP server traffic through the Firewall in both directions. It does this in the form of Firewall access rules for outbound FTP access and with server publishing rules for inbound FTP access through a published FTP Server. This server is located in your internal network or a perimeter network, also known as a DMZ (if you are not using public IP addresses for the FTP Server in the DMZ).

    First, I will show you the steps you will need to follow in order to create a Firewall rule which will allow FTP access for outgoing connections through TMG.

    FTP access rule

    Create a new access rule which allows the FTP protocol for your clients. If you want to allow FTP access for your clients, the clients must be Secure NAT or TMG clients, also known as the Firewall client in previous versions of Forefront TMG.

    Please note:
    If you are using the Web proxy client, only read-only FTP access is possible through this type of client, and you cannot use a classic FTP client; FTP access is possible only through a web browser, with some limitations.

    The following picture shows an FTP access rule.

    Figure 1: FTP access rule

    A well-known pitfall, dating back to ISA Server 2004, is that by default a newly created FTP access rule allows only read-only FTP access. This is a security measure intended to prevent users from uploading confidential data outside the organization without permission. If you want to enable FTP uploads, you have to right-click the FTP access rule and then click Configure FTP.

    Figure 2: Configure FTP

    All you have to do is clear the read-only flag; once the new FTP connection is established, users have all the permissions necessary to carry out FTP uploads.

    Figure 3: Allow write access through TMG

    FTP Server publishing

    If you want to allow incoming FTP connections to your internal FTP servers, or to FTP servers located in the DMZ, you have to create server publishing rules if the network relationship between the external and the internal/DMZ network is NAT. If you are using a route network relationship, it is possible to use Firewall rules to allow FTP access.

    To gain access to an FTP server in your internal network, create an FTP server publishing rule.

    Simply start the new Server Publishing Rule Wizard and follow the instructions.

    As the protocol, you have to select the FTP Server protocol definition, which allows inbound FTP access.

    Figure 4: Publish the FTP-Server protocol

    The standard FTP Server protocol definition uses the associated standard protocol, which can be used for inspection by NIS if a NIS signature is available.

    Figure 5: FTP-Server protocol properties

    The Standard FTP Server protocol definition allows FTP Port 21 TCP for inbound access and the protocol definition is bound to the FTP access filter which is responsible for the FTP protocol port handling (FTP Data and FTP control port).

    Figure 6: FTP ports and FTP Access Filter binding

    Active FTP

    One of the changes in Microsoft Forefront TMG is that the Firewall no longer allows Active FTP connections by default, for security reasons. You have to manually allow the use of Active FTP connections. It is possible to enable this feature in the properties of the FTP access filter. Navigate to the System node in the TMG management console, select the Application Filters tab, select the FTP Access filter, and in the task pane click Configure Selected Filter (Figure 7).

    Figure 7: FTP Access filter properties

    In the FTP access filter properties, select the FTP Properties tab, enable the Allow Active FTP Access checkbox, and save the configuration to the TMG storage.

    Figure 8: Allow Active FTP through TMG

    FTP alerts

    Forefront TMG comes with a lot of predefined alert settings for several components and events. One of them is the alert function for the FTP Filter Initialization Warning. This alert informs the administrator when the FTP filter fails to parse the allowed FTP commands.

    Figure 9: Configure FTP alert options

    The alert actions are almost the same as in ISA Server 2006, so there are no new things to explain for experienced ISA Administrators.

    Conclusion

    In this article, I showed you some ways to allow FTP access through the TMG Server. There are some pitfalls on the way to a successful FTP implementation. One of them, present since ISA Server 2004, is that FTP write access through the Firewall must be enabled explicitly; the other is new to Forefront TMG: Active Mode FTP connections are not allowed by default, so you have to manually activate this feature if you really need this type of special configuration.


    Using SSIS Logging to Resolve Runtime Errors and Performance Issues

    Posted by Alin D on October 28, 2010

    Runtime errors and performance issues can be difficult to identify and resolve. One of the primary methods that assist with their resolution involves generating logs, which contain records of events taking place during code execution. This article provides a comprehensive overview of logging options available in SQL Server 2008 R2 Integration Services.

    In general, software programming presents a unique set of challenges when it comes to troubleshooting coding mistakes. While problems surfacing during the design stage and resulting from syntax or type conversion errors are relatively easy to detect and correct (this can be attributed to guidance incorporated into development tools, such as Business Intelligence Development Studio used to implement SQL Server 2008 Integration Services packages), runtime errors and performance issues are considerably more difficult to identify and resolve. One of the primary methods that assist with their resolution involves generating logs, which contain records of events taking place during code execution. In this article, we will provide a comprehensive overview of the logging options available in SQL Server 2008 R2 Integration Services.

    An overwhelming majority of SSIS log entries represent events raised by packages and their components (you can find out more about them from our earlier article, “SSIS Events and Handlers“). Their creation is handled by log providers, which record arbitrarily chosen events in a designated target location, such as a file, a table hosted in a SQL Server database, a SQL Server Profiler trace, or the Windows Application Log. You have the option of either taking advantage of existing log providers or developing custom ones. We will focus here on the first of these two categories, which includes the following provider types:

    • SSIS log provider for Text files – produces output in the comma-separated values (CSV) format, stored in an arbitrarily chosen file (via a file connection manager), which can afterwards be easily imported into Excel for further analysis.
    • SSIS log provider for XML files – yields an XML-formatted file (which implies its dependency on a file connection manager), making it suitable for a review with help of any XML-compatible viewer (and easily presentable in HTML-based reports).
    • SSIS log provider for SQL Server – records event entries in the sysssislog table of the msdb database (rather than sysdtslog90 used in SQL Server 2005 – although the schemas in both cases are identical) by leveraging the sp_ssis_addlogentry stored procedure. In order to be able to accomplish this, the provider requires an OLE DB connection manager. (A sample query against this table appears after this list.)
    • SSIS log provider for SQL Server Profiler – stores event data in *.trc files (which explains its reliance on a file connection manager) that employ SQL Profiler-specific format. This type of provider is intended primarily for troubleshooting performance issues, allowing you to correlate SQL Server-specific operations traditionally recorded in SQL Profiler traces with corresponding SSIS events.

    • SSIS log provider for Windows Event Log – dumps events in the Windows Application Log. For example, invoking package execution is recorded as Event ID 12556 and its completion is represented by Event ID 12557 (both are easily identifiable by their SQLISPackage100 source).
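
    If you use the SQL Server log provider, the captured entries can be reviewed afterwards with an ordinary query against the sysssislog table. The statement below is a minimal sketch; it assumes the OLE DB connection manager used by the provider points to msdb, as described above, and it filters on a few of the event names discussed later in this article:

    USE msdb;
    GO
    -- Illustrative only: review recent errors and warnings recorded by the
    -- SSIS log provider for SQL Server
    SELECT TOP (50)
    event,
    source,
    starttime,
    message
    FROM
    dbo.sysssislog
    WHERE
    event IN ('OnError', 'OnWarning', 'OnTaskFailed')
    ORDER BY
    starttime DESC;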

    It is worth noting that all providers that you assign to the package and its components share the same collection of events that are to be recorded along with the specific data they should contain. As you might expect, at a minimum, you can choose from the set of events common to all SSIS components (which, as we described in more detail earlier, consist of OnError, OnInformation, OnWarning, OnPreValidate, OnPostValidate, OnPreExecute, OnPostExecute, OnExecStatusChanged, OnVariableValueChanged, OnProgress, OnQueryCancel, and OnTaskFailed). However, your selection is likely to be much wider, since it includes events specific to each component present in the package. In addition, you will also find an entry labeled Diagnostic, which is intended (based on enhancements introduced in Service Pack 2 of SQL Server 2008 Integration Services) for detailed troubleshooting of connectivity issues (it should be disabled otherwise due to the high volume of generated events, which are likely to have a negative performance impact). As indicated above, each log provider (with the exception of the Windows Event Log provider, which dumps its output directly to the Application Event Log) requires a corresponding connection manager defined as part of its configuration.

    For each event type, you have the ability to specify the following individual pieces of data that should be logged:

    • Computer (identifying the name of the system on which the recorded event took place)
    • Operator (indicating the name of the user who invoked package execution)
    • SourceName (providing the name of an executable, such as a task, container, or a package where the event originated)
    • SourceID (Globally Unique Identifier – or simply GUID – matching ID property of the executable and assigned to it at its creation)
    • ExecutionID (a unique identifier assigned to each package execution instance, allowing you to differentiate between two distinct runs)
    • MessageText (providing a description associated with the event)
    • DataBytes (containing log payload revealing auxiliary data about the event).

    Each executable can be configured independently in regard to events and their details that should be recorded during its runtime. Effectively, you have the ability to log a particular event that takes place on the package level, but filter it out for each of its children components (and vice versa). In addition, it is possible to enforce different levels of detail to be recorded in each of these scenarios (excluding individual pieces of data if they are not relevant in a given context).

    In order to better understand the dependencies between different logging settings, let’s review the sequence of steps necessary to configure them. Start by opening an SSIS package in the Designer interface of Business Intelligence Development Studio. Right-click on the empty area of its Control Flow tab and select the Logging… entry from its context-sensitive menu (alternatively, you can choose the same item from the SSIS top-level menu). The resulting Configure SSIS Logs dialog box is divided into two sections. The left one, labeled Containers, lists the hierarchical structure of containers, with checkboxes next to each. The one on the right-hand side is divided into two tabs – Providers and Logs and Details. The purpose of the first of them is to assign and configure a log provider that will be used to collect data during package execution. To accomplish this, highlight the top-level node in the Containers window (which represents the package), select one of the five entries listed in the Provider type listbox (on the right) and click on the Add… command button. As a result, the selected provider will appear in the area labeled Select the logs to use for the container directly underneath. If your choice happened to be SSIS log provider for Windows Event Log, you can proceed to the next step. Otherwise, click on the listbox appearing in the Configuration column and either pick an existing connection or define a new one (via the Configure OLE DB Connection Manager Editor in the case of the SSIS log provider for SQL Server, or the File Connection Manager Editor for all remaining types). Note that it is possible to create multiple log providers of the same type, which allows you to log events to multiple destinations.

    With the intended log providers and their corresponding connection managers in place, switch to the Details tab and select the events to be logged. Turning on the advanced view (by clicking on the Advanced>> command button) will additionally allow you to specify the pieces of data you are interested in (such as Computer, Operator, SourceName, SourceID, ExecutionID, MessageText, and DataBytes). In essence, this gives you the ability to choose specific events along with individual data for each by marking checkboxes on or off in a table-like structure (where rows represent event types and individual pieces of data appear as columns).

    Logging on the package level can be either enabled or disabled; child containers, however, have three possible settings. The third one, represented by a grayed checkbox, indicates that the logging configuration is inherited from the parent (you also have the ability to determine the resulting value for a given container by checking whether its LoggingMode property is set to Enabled, Disabled, or UseParentSetting). This is convenient if you want to enforce consistency across all executables, since it eliminates the need to change the setting in each child individually.

    Once filtering settings are configured, you have an option to evaluate whether they yield the desired level of logging without generating (and reviewing) actual logs. Instead, prior to invoking package execution, activate the Log Events window (accessible via View->Other Windows menu) in the Designer interface of Business Intelligence Development Studio (to accomplish the same when invoking package execution via the DTExec command line utility, take advantage of its /ConsoleLog switch). Content of the window will reflect assigned settings even if log providers have been temporarily disabled.

    If you want to apply the same configuration to multiple packages, use the Save… command button in the Configure SSIS Logs dialog box (while the top node is highlighted in its Containers section). This will allow you to store current logging settings in an XML formatted file, which you can subsequently apply to another package loaded into Business Intelligence Development Studio (via the Load… command button in the same dialog box).


    SSIS Script Task

    Posted by Alin D on October 27, 2010

    SQL Server 2008 Integration Services contains an assortment of predefined Control Flow tasks designed to carry out specific types of actions. Collectively, they cover a wide range of scenarios; however, their specialized nature sometimes turns out to be overly restrictive. This article explains how to accommodate these custom requirements by employing the considerably more flexible Script Task.

    SQL Server 2008 Integration Services contains a diverse assortment of predefined Control Flow tasks, which are designed to carry out specific types of actions. While collectively they cover a wide range of scenarios involving data extraction, transformation, and loading, their specialized nature sometimes turns out to be overly restrictive. In situations like these, it is usually possible to accommodate custom requirements by employing the considerably more flexible Script Task. In this article, we will cover its most relevant features (it is important to note that its purpose is quite different from the Data Flow-based Script Component, whose characteristics we are also planning to present on this forum).

    It is hard to overstate the flexibility of the Script Task, considering that the boundaries of its capabilities are defined primarily by your ingenuity, your skills, and the .NET programming model (starting with SQL Server 2008 Integration Services, it became possible to use Microsoft Visual C#, in addition to the Microsoft Visual Basic available in its predecessor). The task operates essentially as a wrapper of managed code with access to SSIS-specific objects, including their methods and properties, interacting with the parent container (and other SSIS entities) through arbitrarily chosen System and User variables. Script editing is handled using Visual Studio Tools for Applications (VSTA), which replaces the Visual Studio for Applications (VSA) present in earlier versions (and which, incidentally, was the primary obstacle to providing support for Visual C#). The VSTA interface offers visual enhancements standard in practically every current software development environment, such as color-coding and IntelliSense, as well as debugging functionality including breakpoints (which integrate with breakpoint indicators of SSIS tasks and containers) and the Immediate and Output windows. Furthermore, it simplifies referencing Dynamic Link Libraries and makes it possible to directly reference Web services and COM modules, eliminating the need to create proxy classes and interop assemblies or to copy them into the Global Assembly Cache and Microsoft.NET\Framework folders (which used to be the case when working with VSA). The resulting code is precompiled into binary format (VSA was more flexible in this regard, giving you an option to postpone compilation until execution), effectively shortening the total runtime of the package (although at the cost of its overall size).

    In order to identify the most relevant characteristics of Script Task, let’s examine in more detail its interface exposed in the Business Intelligence Development Studio. Create a new project based on the Integration Services template, add the task to its Designer window (by dragging its icon from the Toolbox), and display its Editor dialog box (by selecting Edit entry from its context sensitive menu), which is divided into three sections:

    • Script section – containing the following elements:
      • ScriptLanguage textbox – designates the programming language (Microsoft Visual Basic 2008 or Microsoft Visual C# 2008) in which code contained within the task is written. Make sure to choose the intended entry before you activate the Visual Studio Tools for Applications interface (by clicking on the Edit Script… command button), since at that point, you will no longer be able to alter your selection (this action triggers auto generation of the ScriptMain class based on a built-in template using the language of your choosing).
      • EntryPoint textbox – identifies a method (which must be defined as part of the ScriptMain class in the VSTA-based project) that is invoked when the Script Task executes (set by default to Main; a minimal sketch of such a method appears at the end of this article)
      • ReadOnlyVariables and ReadWriteVariables textboxes – determine which SSIS variables will be accessible from within the script by listing their names (as comma-separated entries in the format namespace::variable name). While it is possible to type them in, the most straightforward (and the least error prone – since they are case sensitive) approach involves pointing them out directly in the Select Variables dialog box accessible via the ellipsis (...) command button located next to each textbox. Another approach to specifying SSIS variables that can be either viewed or modified within a Script Task leverages the LockForRead and GetVariables methods of the VariableDispenser object (we will explore it in more detail in our future articles); however, it is important to realize that these methods are mutually exclusive (an attempt to reference the same variable using both will result in an error).
      • Edit Script… command button – triggers display of Microsoft Visual Studio Tools for Applications 2.0 Integrated Development Environment with the majority of its desktop estate occupied by the window containing the task code. In addition, you will also find Properties and Project Explorer windows, where the latter references the newly created Script Task via automatically generated identifier (which guarantees its uniqueness and therefore should not be changed). Effectively, content of the task constitutes a separate project, with its own set of properties (independent from characteristics of the SSIS project it is part of) accessible via its context sensitive menu and displayed in the tabbed window divided into the following sections:
        • Application – designates properties of the assembly (some of which, such as output file, name, and root namespace are derived from the auto generated identifier of the script task). In general, settings on this page are intended to create independent assemblies via a full-fledged edition of Visual Studio and therefore are not relevant in the context of our presentation (as a matter of fact, most of them are grayed out because of their read-only status), although you have the ability to customize Assembly Information (including such parameters as Title, Description, Company, Product, Copyright, Trademark, Assembly Version, File Version, GUID, or Neutral Language) as well as Make assembly COM-Visible.
        • Compile – allows you to modify the Build output path (by default pointing to bin subfolder), Compile option (such as Option explicit, Option strict, Option compare, and Option infer), a number of Warning configurations, settings such as Disable all warnings, Treat all warnings as errors, Generate XML documentation file, or Register for COM interop, as well as a number of Advanced Compile Options (for example, defining DEBUG, TRACE, or custom constants).
        • Debug – provides the ability to assign Command line arguments and Working directory for debug start options.
        • References – likely the most helpful feature available via the Properties window, considerably simplifies adding assembly references to your projects (replacing cumbersome process implemented in previous version of Visual Studio for Applications of SQL Server 2005 Integration Services) as well as identifying and removing unused ones.
        • Resources – facilitates management of project resources, such as strings or bitmap, icon, audio, and text files. This functionality is intended for development of standalone programs and is not applicable here.
        • Settings – defines project’s application settings and similar to the Resources page, contains entries pertinent to development of independent applications.
        • Signing – provides the ability to sign assemblies (which requires a strong name key file), which is not relevant in this context.

    • General section – containing the following entries:
      • Name – allows customizing the name of the Script Task (which happens to be its default) in order to improve readability of your package. Note that its value must be unique within a package.
      • Description – intended for additional information helping you make packages self-documenting.

    • Expressions section – provides the ability to assign values of Script Task properties by dynamically evaluating associated expressions (rather than specifying their values directly).
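
    To tie these settings together, the following is a minimal sketch of what the Main method of a C# Script Task might look like. The variable names User::SourceFolder (listed under ReadOnlyVariables) and User::FileCount (listed under ReadWriteVariables) are hypothetical placeholders, not part of any real package; the ScriptMain class, the Dts object, and the ScriptResults enumeration are generated for you by the Script Task template, so only the body of Main is yours to write:

    public void Main()
    {
        // Read a variable listed in the ReadOnlyVariables textbox
        // (User::SourceFolder is a hypothetical example).
        string sourceFolder = Dts.Variables["User::SourceFolder"].Value.ToString();

        // Write to a variable listed in the ReadWriteVariables textbox
        // (User::FileCount is likewise hypothetical and assumed to be of type Int32).
        Dts.Variables["User::FileCount"].Value = System.IO.Directory.GetFiles(sourceFolder).Length;

        // Raise an informational event that any configured log providers can record.
        bool fireAgain = true;
        Dts.Events.FireInformation(0, "Script Task", "Counted files in " + sourceFolder, string.Empty, 0, ref fireAgain);

        // Report the outcome back to the package.
        Dts.TaskResult = (int)ScriptResults.Success;
    }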
