Windows Management and Scripting

A wealth of tutorials on Windows Operating Systems, SQL Server and Azure


Posts Tagged ‘database server’

Install WSUS server on Hyper-V virtual machine

Posted by Alin D on June 27, 2012

As organizations continue to move away from the use of physical servers, a frequent question arises: Is it a good idea to virtualize WSUS servers? Short answer: yes. Read on to find out how to run WSUS in a Hyper-V machine.

Will WSUS run in a virtual machine?

In a word, yes. If you plan on hosting a WSUS virtual machine on Hyper-V, it is generally recommended that you run WSUS on top of the Windows Server 2008 R2 operating system. In order to do that, you will have to deploy WSUS 3 SP2. Until SP2, WSUS did not work properly with Windows Server 2008 R2, and it did not support the management of Windows 7 clients.

What is the easiest way to virtualize a WSUS server?

If you are currently running WSUS 3 on a physical server, I would recommend doing a migration upgrade. To do so, set up a virtualized WSUS server, configure it to be a replica of your physical WSUS server, and then perform a synchronization. Once the sync process completes, reconfigure the virtual WSUS server to be autonomous. Then you can decommission your physical WSUS server.

This technique offers two main advantages. First, it makes it easy to upgrade the WSUS server’s operating system if necessary. The other advantage is that this method offers far less down time than a standard P2V conversion because your physical WSUS server continues to service users while your virtual WSUS server is being put into place.

What kind of capacity can I get from a virtualized WSUS server?

A single WSUS server should be able to handle up to 25,000 clients. However, this assumes that sufficient resources have been provisioned and that SQL Server is running on a separate server (physical or virtual). Some organizations have been able to achieve higher capacities by using multiple front-end servers.

What are the options for making WSUS fault-tolerant?

In a physical server environment, WSUS is made fault-tolerant by eliminating any single points of failure. Normally you would create a Network Load Balancing (NLB) cluster to provide high availability for your WSUS servers. Of course WSUS is dependent on SQL Server and the preferred method for making SQL Server fault-tolerant is to build a failover SQL Server cluster.

While it is possible to recreate this high-availability architecture in a Hyper-V infrastructure, it is usually considered to be a better practice to build a Hyper-V cluster instead.  If your host servers are clustered then clustering your WSUS servers and your SQL servers becomes unnecessary (at least from a fault tolerance standpoint).

If Hyper-V hosts are not clustered (and building a Hyper-V cluster is not an option for whatever reason) then I would recommend going ahead and creating a clustered architecture for the virtualized WSUS and SQL servers. However, you should make sure not to place multiple WSUS or SQL servers onto a common Hyper-V server because doing so will undermine the benefits of clustering WSUS and SQL Server.

What do I need in terms of network bandwidth?

There are no predetermined rules for providing network bandwidth to a virtualized WSUS server. Keep in mind, however, that there are a number of different issues that can occur as a result of insufficient bandwidth. If at all possible, I would recommend dedicating a physical network adapter to your virtual WSUS server. If you are forced to share a network adapter across multiple virtual servers then use network monitoring tools to verify that the physical network connection isn’t saturated.

If saturation becomes an issue, remember that WSUS can be throttled either at the server itself or at the client level through the use of group policy settings. You can find client throttling policies in the Group Policy Object Editor at Computer Configuration > Administrative Templates > Network > Background Intelligent Transfer Service.

Are there any special considerations for the SQL database?

It is generally recommended to run SQL Server on a separate machine (physical or virtual) so that you can allocate resources directly to the database server. I also recommend running the Cleanup Wizard and defragmenting the database every couple of months. Doing so will help the database to run optimally, which is important in a virtualized environment.
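
If WSUS uses a full SQL Server instance rather than the Windows Internal Database, that reindex-and-update-statistics pass can be scripted. Here is a minimal T-SQL sketch, assuming the WSUS database keeps its default name SUSDB; run it during a maintenance window:

USE SUSDB;
GO

-- Rebuild every index in the WSUS database and refresh statistics.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ALL ON '
    + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' REBUILD;' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

EXEC sys.sp_executesql @sql;   -- rebuild all indexes
EXEC sp_updatestats;           -- then update statistics
GO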

Another thing to keep in mind is that SQL Servers tend to be I/O intensive. Therefore, if you are planning to virtualize your SQL server then you might consider using dedicated physical storage so that the I/O load generated by SQL does not impact other virtual machines.


Best practice for SSRS deployment

Posted by Alin D on June 22, 2011

While the SQL Server Reporting Services (SSRS) platform is not difficult to learn and work with, it is still a fairly complex technology. Successful use of SSRS requires a combination of database, administration, report building and data analysis skills. Such a combination of expertise is often hard to put together, especially in smaller companies where one person might wear many hats.

As a consultant, I have seen several SQL Server Reporting Services deployments that could have benefited from a few simple SSRS best practices. Here are a few.

Back up the encryption key.

SSRS uses encryption to protect sensitive data in its configuration. Things like connection strings and passwords are stored in the back-end ReportServer database and in the configuration files. They are encrypted using an encryption key that’s stored in SSRS. If you move SSRS to another server, you need to use the same encryption key to decrypt all encrypted data. Therefore, proper encryption key management is extremely important.

When you install SSRS, the first thing you should do is use the Reporting Services Configuration Manager and back up the encryption key to a password-protected file. Keep a copy of this key file on the SSRS server and also in a safe spot somewhere on the network. If you ever need to migrate SSRS to another server, you can use the same configuration manager to restore the key from the original server. Otherwise, you will have to manually re-create all your data sources and other encrypted content. That’s not something you want to do, especially if your SSRS server is not functional and you are quickly trying to bring up SSRS on another server. Even though Microsoft has emphasized the importance of keeping a backup of the encryption key, I still sometimes find myself at a client site and discover that the key isn’t backed up.

Use Windows Active Directory groups to control security.

Systems administrators have long been following the practice of creating Windows groups and granting privileges to the group instead of assigning privileges to individual user accounts. This practice makes a lot of sense, since you can easily add or remove users from a group and make your security management much easier. But I don’t see this practice as widely used among developers and database administrators. I’ve seen many SSRS installations where whoever was managing privileges assigned individuals access to reports or report folders instead of creating groups like Marketing or Management to simplify administration.

Use report folders to control security.

Just as it makes sense to utilize Windows groups instead of user accounts, you’ll gain a similar advantage by managing security at the folder level. Group your reports into logical groups, place them in a report folder and then assign privileges to the folder rather than to individual reports. SSRS also allows you to cascade privileges to the subfolders so you can design a hierarchy of privileges in which higher privilege groups can view all folders, while other groups can view only reports closer to the root folder.

Use saved authentication when configuring report data sources.

While using Windows Authentication is often the most recommended option, it doesn’t always work well in SSRS. If you configure a report to use Windows Authentication to connect to a SQL Server database, it only works if the database is on the same server as the SSRS server. But if you need to connect to another physical server, a “double-hop” authentication is needed — one hop between the browser and SSRS and the other hop between SSRS and the database server. I had to troubleshoot this issue when a report worked while the user was using a browser on the SSRS server but stopped working when SSRS was accessed from another machine, resulting in double-hop authentication. Theoretically, double-hop authentication should work if you properly configure the Kerberos authentication protocol on the network, but I haven’t seen much success in that area. You are better off configuring a data source to use a SQL Authentication login, or specifying a Windows account that should be used to connect to SQL Server.

Back up the SSRS back-end databases.

SSRS uses ReportServer and ReportServerTempDB databases, and you should back those up to a location other than the SQL Server machine they run on. You will need them if your server dies and you need to re-create the SSRS environment; otherwise you will have to redeploy all your reports and redo all configurations. I’ve seen companies making backups to a local drive, but if you lose the whole machine, those will do you no good.
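
As a rough sketch (the share path below is a placeholder, not a value from this article), both back-end databases can be backed up with plain BACKUP DATABASE statements pointed at a network location:

-- Back up both SSRS databases to a network share (the share path is a placeholder).
BACKUP DATABASE ReportServer
TO DISK = N'\\backupserver\ssrs\ReportServer.bak'
WITH INIT, CHECKSUM;

BACKUP DATABASE ReportServerTempDB
TO DISK = N'\\backupserver\ssrs\ReportServerTempDB.bak'
WITH INIT, CHECKSUM;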

Practice SSRS migration to another server.

Migrating SSRS to another server is relatively simple: Back up the ReportServer and ReportServerTempDB databases and the encryption keys. Next, restore them on another SQL Server and configure the new SSRS server to use them. Once you restore the encryption key, your new SSRS environment should be identical. This is a good exercise, because if your SSRS server ever dies, you will be able to bring a new server online much faster.

Keep all reports under source control.

Very often, a company has several people developing reports and deploying them to the server without having a central location to store the files and keep them versioned. Developers are used to working with source control software such as SourceSafe or SVN, but business users are not. Since business users often build and deploy reports as well, they should use the same procedure and discipline: check new reports into source control and check them out when they need to make modifications. Aside from having your reports in a central place, where they are versioned and backed up, you’ll find it much easier to build a new SSRS environment by pulling the reports from source control, as opposed to collecting the report definition files from the individuals who developed each report.

While the SSRS best practices in this article are intuitive and easy to implement, not every company has them in place. I highly recommend that you check your SSRS configuration and make the recommended configurations. In addition, remember to back up the keys, the databases and practice migrating to another server. After all that work, your SSRS administration will require less time, and you will be better prepared to deal with an unexpected migration to a new SSRS server.


Best practices for SQL Clustering

Posted by Alin D on June 8, 2011

SQL Server clustering is a high-availability technology for SQL Server instances. It involves the sharing of server resources between one or more nodes (or servers), which have one or more shared disks grouped into logical units called resource groups. A resource group containing at least one IP address, network name and disk resource is called a virtual server. The cluster service arbitrates ownership of the resource groups. A single node can own a resource group and its associated resources at any given time.

Clustering basics

Each virtual server appears on the network as a complete system. When the virtual server contains SQL Server resources, clients connected to the virtual server access resources on its current host node. While the terms “active” and “passive” are often used in this context, they are not fixed roles, as all nodes in a cluster are interchangeable. Should the current host, sometimes designated as the primary, fail, the resource group will be transferred to another node (the secondary node) in the cluster. With clusters having more than two nodes or two instances, it is important to set the failover order by choosing the preferred node ownership order for each instance. The secondary will become the primary and host the virtual server. Active client connections are broken during failover, and work in progress is lost; clients can then reconnect to the virtual server now hosted by the new node. Clients may have to reconnect manually, although most commercial applications now handle this reconnection task seamlessly.

The goal of clustering is to provide increased availability to clients by having a hot standby system with an automatic failover mechanism. SQL Server clustering is not a load-sharing or scale-out technology. On any cluster there will be a brief interruption in database service during a failover. On large clusters with multiple nodes and instances, clients may experience degraded performance during a failure event, but they will not lose database availability.
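
On SQL Server 2005 and later you can verify from T-SQL whether an instance is clustered and which physical node currently hosts it, which is handy when documenting a cluster; for example:

-- Is this instance clustered, and which node is currently hosting it?
SELECT
    SERVERPROPERTY('IsClustered')                 AS is_clustered,
    SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_host_node;

-- List all nodes that belong to the failover cluster (empty on a non-clustered instance).
SELECT NodeName
FROM sys.dm_os_cluster_nodes;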

Clustering topologies

There are four types of cluster topologies — or arrangements of nodes in a cluster:

  • Single instance
  • Multi-instance
  • N+1
  • N+M

Single instance:

In this case, one node in a cluster owns all resource groups at any one time, and the other nodes are offline. Should the primary node owning the resources fail, the resource groups will be transferred to the secondary node, which comes online. As the secondary node comes online (starts up), it assumes ownership of the resource groups, which typically consist of disks containing your database files and transaction logs. SQL Server then starts on the virtual server and, as it recovers each database, rolls transactions in the transaction log forward or back (committed work is redone, uncommitted work is undone).

This topology was formerly called active-passive. Single-instance clustering is most frequently used for mission-critical applications, where the cost of downtime far outweighs the cost of the wasted hardware resources of the secondary node sitting idle while offline.

Multiple instance:

In this situation, one virtual server in a cluster owns some of the resource groups and another virtual server owns other resource groups. At any one time, the virtual servers themselves can be hosted by a single node or different nodes and would appear to clients as named instances of a single server. In that case, they are named instances of a virtual server, hence the name multiple instance. With multiple-instance clustering, previously called active-active, the hardware requirements of each individual node are greater as each node may at any one time be hosting two (or more) virtual servers.

You should consider multiple-instance clusters to be more cost effective than single-instance clusters as there are no nodes offline or waiting. However, should one node host more than one virtual server, performance for clients is typically degraded. Your best bet is to use multiple instances when you require high availability but not high performance.

N+1:

This is a modification of the multiple-instance clustering topology where two or more nodes share the same failover node. The secondary node will need enough hardware capability to support the load of all N servers at any one time should they all fail over simultaneously. You can achieve cost savings if multiple clusters use the same failover node. However, the cost of an individual node tends to be small in comparison to other related clustering costs, such as storage.

Many people consider N+1 to be more cost effective than multiple-instance clustering because there is only one secondary node offline (or waiting) for several active nodes. However, depending on the hardware configuration of the failover node, it does not offer the performance of multiple-instance clustering. Use N+1 in environments where cost constraints force you to reduce the number of failover nodes and you need high availability but not high performance.

N+M:

In this topology you have two or more working nodes in a cluster along with two or more standby nodes. It is typically configured as an eight-node cluster with six working nodes and two standby nodes, or five working nodes and three standby nodes.

N+M offers some of the cost benefits of N+1, but it has a lower chance of performance degradation during a multiple failure event than N+1 since the failover node(s) do not have to support the entire load of the failed nodes. Use N+M in environments where cost constraints force you to reduce the number of failover nodes and at the same time provide a high level of performance.

Clustering dependencies

SQL Server clustering has several dependencies:

  • Network
  • Hardware
  • Software

Network dependencies:

Clustering requires a private network among all nodes in a cluster. Clustering services use a private communication channel on each node to keep in sync with each other. This allows the cluster to communicate and act appropriately even if the public network is offline. Looks-Alive and Is-Alive checks — used by cluster services to determine if a cluster resource group is “up” — connect over the public networks to best emulate a client connection process.

Hardware dependencies:

Clustering requires specialized hardware and software. To share resources between nodes, you need specialized disk controllers, and the clustering hardware must be certified by Microsoft to meet the requirements of clustering. You must also have a second set of network cards to provide the private network between cluster nodes.

Software dependencies:

To benefit from clustering services, you need specialized versions of the operating system (Windows 2000 and 2003 Enterprise or Data Center editions). You will also need SQL Server 2000 Enterprise Edition, SQL Server 2005 Standard Edition (up to two nodes) or SQL Server 2005 Enterprise Edition (up to eight nodes).

Clustering best practices

What follows is a list of clustering best practices. I have broken these down according to dependencies.

Network best practices

The public network and the private network in a cluster require two different, and in some respects opposite, sets of settings.

Private

Ensure the private network is private. Clustering requires a 150-ms ping response time. If your private network is saturated or congested with other network traffic, you may find your clusters failing over unexpectedly. On your private network, consider isolating traffic by implementing a VLAN (virtual LAN), a separate subnet or use a crossover cable for Single-instance clusters. The actual traffic generated by cluster communication is small, so high-bandwidth networks are unnecessary. However, they must still be low latency and reliable. Make sure the following points are established on the private network:

  • Use TCP/IP as the only protocol bound to the NIC.
  • No default gateway is configured.
  • No DNS servers are configured unless the cluster nodes are DNS servers, in which case 127.0.0.1 should be configured.
  • No DNS registration or DNS suffix is configured.
  • No WINS servers are configured.
  • Static IP addresses are used for all nodes.
  • NetBIOS over TCP/IP is disabled.
  • No NIC teaming is used, where two network interface cards are aggregated together to act as a single NIC card.

Public

For your public network, use at least two WINS or DNS servers on your cluster network segment or VLAN. While installing your cluster you will have to resolve cluster, DC (domain controller) and virtual server names, so you must have a name server on your network. Providing a name server on the same network segment also decreases the time required for a node to fail over.

Use at least two DCs on your network. Clustering requires DCs not only during setup but also for normal functioning and failover.

If you use NIC teaming for greater bandwidth throughput and reliability, do not configure it while building the cluster. Add NIC teaming as a last step before final testing. Be prepared to “undo” NIC teaming as an early step in troubleshooting. Microsoft Customer Support Services (CSS) will likely direct you to disable teaming as a first diagnostic step, so be ready.

Both

Ensure that your network card settings are identical for every server in your cluster and that they are not configured to automatically detect network settings.

Software best practices

Ensure applications are cluster aware and will not lose work or fail to meet the SLA during a cluster failover.

Ensure transactions are as small as possible in your application and on any jobs that may run on your clustered SQL Servers. Long-running transactions increase the length of time required to apply the transaction log on the failover node and consequently increase the amount of time for failover.
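
For example, a large purge or archive job can be broken into many short transactions so that each unit of work written to the log is brief; a generic sketch (the table and column names are illustrative only, not from this article):

-- Illustrative only: purge old rows in small batches so each transaction stays short.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LoggedOn < DATEADD(MONTH, -6, GETDATE());

    SET @rows = @@ROWCOUNT;    -- zero when nothing is left to delete

    COMMIT TRANSACTION;
END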

Do not run antivirus software on cluster nodes. If you must run antivirus software, be sure the quorum disk and database files are excluded from the scans. Even in this configuration, there have been reports of antivirus drivers interfering with cluster disk resource failover. Test your setup and make sure it fails over as expected. Select another antivirus product if yours causes problems.

Make sure there are no password expiration policies in use for any of the cluster-related accounts. Cluster accounts should:

  • be the same for all nodes in the cluster;
  • be domain accounts (but not domain admin accounts) with local administrative rights on each node in the cluster (SQL Server 2005 forces you to set up domain-level groups for these accounts and then grants appropriate rights to the groups);
  • have the least security privileges needed, to minimize the damage that could be done to the node or other servers on your network should the password be compromised or the account be hijacked by a buffer overflow.

Ensure all software components are the same version (i.e., SQL Server 2005 Standard), same architecture (i.e., 64 bit for all OS and SQL Server components) and at the same service pack and hot fix level. The exception is that individual SQL Server instances can be at different releases, editions and hotfix levels.
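
A quick way to compare nodes is to run the same SERVERPROPERTY query against each instance and compare the output; something along these lines:

-- Run on every node/instance and compare the results.
SELECT
    SERVERPROPERTY('MachineName')    AS machine_name,
    SERVERPROPERTY('Edition')        AS edition,
    SERVERPROPERTY('ProductVersion') AS product_version,
    SERVERPROPERTY('ProductLevel')   AS service_pack_level;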

Ensure all external software dependencies (COM components, file paths, binaries) are either cluster aware or installed on all nodes in a cluster. MSDTC (Microsoft Distributed Transaction Coordinator) is the most common external dependency in a cluster. While it is not necessary, many people install it before installing SQL Server because installing it later is much harder.

When installing a cluster, consider installing a single-node cluster and adding nodes to the cluster as required. This way, if the cluster setup fails while adding a single node, you are left with a working cluster (although it could be a single-node cluster).

When applying hot fixes or service packs that require a system reboot, apply them to the primary (the current instance host), fail over to the secondary, reboot the primary, fail back to the primary and then reboot the secondary. Typically, hot fixes and service packs are cluster aware and install on all cluster nodes simultaneously.

Hardware

Ensure that your cluster is approved by the vendor and that it is part of the Microsoft Windows Catalog with a specific endorsement for clustering.

Ensure each node in your cluster has identical hardware and components.

Regularly check vendor Web sites for potential hardware problems, fixes and BIOS patches for each component in your cluster.

Use the appropriate RAID technology to ensure that your disk array is fault tolerant. Be as proactive as possible in replacing failed or marginal disks. A disk failure will put a greater load on the remaining disks in an array and may cause other marginal disks to fail. Depending on your RAID technology, your RAID array may not be tolerant to more than one disk failure per array.

Ensure you have properly conditioned or charged batteries on any array controller; this prevents data loss or corruption in the event of a power failure.

Use uninterruptible power supplies and be sure you have redundancy in your power supplies.

Use Hot-Add Memory if it’s supported by your SQL Server version, operating system and hardware. Hot-Add Memory is a hardware technology that allows you to add memory to a running system; the OS detects and uses the additional memory. Windows Server 2003, Enterprise and Data Center Editions, as well as SQL Server 2005 Enterprise Edition can take advantage of Hot-Add Memory. Read about Hot-Add Memory Support in Windows Server 2003.

Use ECC (Error Correction Code) memory chips, which store parity information used to reconstruct original data when errors are detected in data held in memory.

Use fault-tolerant NICs and network devices (switches).

Summary

Clustering is a relatively new technology and has a reputation for being fragile. SQL Server 2000 clustering is far simpler than the earlier versions and has proven to be much more reliable. Today, clustering on SQL Server 2000 and SQL Server 2005 is a highly reliable technology, but it still has many dependencies that can prevent it from meeting your high-availability goals. Foremost among these dependencies is a staff that is trained and knowledgeable. Running a close second is having operating processes and procedures that are designed to work specifically with a SQL Server cluster. Ensure that you address all of your clustering dependencies to deliver high availability with SQL Server clustering.





Six steps to configure SQL server on a SAN

Posted by Alin D on June 8, 2011

Storage area networks (SANs) make it easy to connect massive amounts of expandable storage to a server. SANs are particularly useful for SQL Server installations: Enterprise databases don’t just require a great deal of storage; they also have continually expanding storage needs. That said, you need to take some care when using SANs in clustered SQL Server environments. In this tip, I’ll give you some suggestions to keep in mind when setting up a SQL Server cluster on a SAN.

1. Get manufacturer-specific guidelines for tuning

SANs are not all built the same. Know your SAN before you hook it up and start populating it with data. For instance, you must understand how to prepare disks and what recommendations the manufacturer offers so they will work well in a clustered Windows Server environment. Check to see if the SAN you’re using has actually been tested in a clustered environment or not.

For instance, you will likely have to use the DISKPART.EXE utility (included in Windows Server 2003 Service Pack 1) to fine-tune disk-track alignment. Hewlett-Packard Co. is one company that provides detailed documentation with its storage devices about how to perform this kind of tuning for Windows Server 2003. (This is usually referred to as the “LUN offset” on SANs.)

2. Use RAID-10 whenever possible

This isn’t a cluster-specific piece of advice but it’s important nonetheless. If cost is less important than data integrity, use RAID-10 for your SAN, which is widely considered one of the best storage arrangements for databases although it comes at a higher cost.

For those not familiar with it, RAID-10 is “nested RAID,” or a RAID-0 array made from a set of RAID-1 arrays. It’s also been described as a stripe of mirrors. This is an extremely robust and efficient setup; RAID-10 is not just highly fault-tolerant, but it supports fast writing, too, which is critical in a database.

When you set up a RAID-10 system, put data and log files on different sets of mirrored spindles to enhance both your speed and your recovery options. The more physical spindles you can spread your data across, and the more redundancy and parallelism you can get, the better.

RAID-5 is also commonly recommended for databases, but RAID-5 is best on read-only volumes. RAID-10 is best in any scenario where disk activity has more than 10% writes, which is probably the vast majority of databases out there. For very large databases that grow into the terabytes, you could even consider RAID-100, which adds yet another level of nesting and striping (also called “plaid RAID”).
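
To estimate whether a database sits above or below that rough 10% write threshold, the cumulative file-level I/O counters can be inspected on SQL Server 2005 and later; a sketch:

-- Cumulative reads vs. writes per database since the instance last started.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    SUM(vfs.num_of_reads)    AS reads,
    SUM(vfs.num_of_writes)   AS writes,
    CAST(100.0 * SUM(vfs.num_of_writes)
         / NULLIF(SUM(vfs.num_of_reads) + SUM(vfs.num_of_writes), 0)
         AS decimal(5, 2))   AS write_pct
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY write_pct DESC;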

3. Active/active and active/passive considerations

An active/active (a/a) cluster should get a different disk arrangement than an active/passive (a/p) cluster. An a/a cluster has two nodes or servers, which are both active at the same time, balancing the load between them and mirroring each other’s updates. If one server goes offline, the other can pick up the slack as needed. An a/p arrangement has one server running continuously with the other server sitting idle. If the main server fails, only then does the backup server kick in.

With a/a clusters, each database server should get its own set of mirrored disk spindles; the two should not share the same logical drive for their databases. This is obviously more expensive, but if you want the best possible uptime, then the cost involved in adding the needed disks will be well worth it. Some database administrators go so far as to provide a dedicated SAN to each node of the cluster. However, if the amount of data replicating between nodes outweighs the amount of data going to and from clients, it might make more sense to keep the data for an a/a setup on the same SAN (albeit on different physical disks).

With a/p, you can easily have the database(s) sharing disks or SAN units. Since only one database server is active at any given time, there’s no contention going on.

4. Keep drive lettering consistent across clusters

This is one of the most cluster-specific pieces of advice to keep in mind. All host nodes in a cluster must see the same drives with the same drive letters, so plan your drive lettering cluster-wide. The clustering software controls who has access to a specific device, so you don’t need to worry about that; but each node must have a consistent view of the storage to be used.

5. Don’t try to move temporary databases around

The temporary databases used by SQL Server are part of the failover process and need to be available in a shared context. Don’t try to move them around. You may think you’re getting SAN bandwidth back by hosting temporary databases locally, but it’s not worth doing at the expense of basic functionality.
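
To confirm where tempdb (and the other system databases) actually live on the shared storage, a simple catalog query is enough; for example:

-- Show the physical location of tempdb and the other system databases.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file,
       physical_name
FROM sys.master_files
WHERE database_id IN (DB_ID('tempdb'), DB_ID('master'), DB_ID('msdb'));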

6. Do backups through mapped drives only

If you’re using a SAN to store SQL Server backups, those backups should be run through a mapped drive letter and not through a UNC name. Failover SQL Server clusters can only work through storage devices registered with the Cluster service (via Cluster Administrator). This becomes doubly important if you have a failure and need access to SQL Server backups through a device shared on the cluster. Also remember to keep the advice in tip #4 in mind when mapping out drives for your backups.

 


Active Directory Rights Management Services (AD RMS)

Posted by Alin D on January 19, 2011

Active Directory Rights Management Services (AD RMS) is an information protection technology that works with AD RMS-enabled applications to help safeguard digital information from unauthorized use. Content owners can define who can open, modify, print, forward, or take other actions with the information.

Introduction

Your organization’s overall security strategy must incorporate methods for maintaining the security, protection, and validity of company data and information. This includes not only controlling access to the data, but also how the data is used and distributed to both internal and external users. Your strategy may also include methods to ensure that the data is tamper-resistant and that the most current information is valid based on the expiration of outdated or time-sensitive information.

AD RMS enhances your organization’s existing security strategy by applying persistent usage policies to digital information. A usage policy specifies trusted entities, such as individuals, groups of users, computers, or applications. These entities are only permitted to use the information as specified by the rights and conditions configured within the policy. Rights can include permissions to perform tasks such as read, copy/paste, print, save, forward, and edit. Rights may also be accompanied by conditions, such as when the usage policy expires for a specific entity. Usage policies remain with the protected data at all times to protect information stored within your organization’s intranet, as well as information sent externally via e-mail or transported on a mobile device.

AD RMS Features

An AD RMS solution is typically deployed throughout the organization with the goal of protecting sensitive information from being distributed to unauthorized users. The addition of AD RMS–enabled client applications such as the 2007 Office system or AD RMS–compatible server roles such as Exchange Server 2007 and Microsoft Office SharePoint Server 2007 provides an overall solution for the following uses:

Enforcing document rights

Every organization has documents that can be considered sensitive information. Using AD RMS, you can control who is able to view these sensitive files and prevent readers from accessing selected application functions, such as printing, saving, copying, and pasting. If a group of employees is collaborating on a document and frequently updating it, you can configure and apply a policy that includes an expiration date of document rights for each published draft. This helps to ensure that all involved parties are using only the latest information—the older versions will not open after they expire.

Protecting e-mail communication

Microsoft Office Outlook 2007 can use AD RMS to prevent an e-mail message from being accidentally or intentionally mishandled. When a user applies an AD RMS rights policy template to an e-mail message, numerous tasks can be disabled, such as forwarding the message, copying and pasting content, printing, and exporting the message.

Depending on your security requirements, you may have already implemented a number of technologies to secure digital content. Technologies such as Access Control Lists (ACLs), Secure Multipurpose Internet Mail Extensions (S/MIME), or the Encrypted File System (EFS) can all be used to help secure e-mail and company documents. However, AD RMS still provides additional benefits and features in protecting the confidentiality and use of the data stored within the documents.

Active Directory Rights Management Services Components

The implementation of an AD RMS solution consists of several components, some of which are optional. The size of your organization, scalability requirements, and data sharing requirements all affect the complexity of your specific configuration.


AD RMS Root Cluster

The AD RMS root cluster is the primary component of an AD RMS deployment and is used to manage all certification and licensing requests for clients. There can be only one root cluster in each Active Directory forest, and it must contain at least one Windows Server 2008 server that runs the AD RMS server role. You can add multiple servers to the cluster for redundancy and load balancing. During initial installation, the AD RMS root cluster performs an automatic enrollment that creates and signs a server licensor certificate (SLC). The SLC grants the AD RMS server the ability to issue certificates and licenses to AD RMS clients. In previous versions of RMS, the SLC had to be signed by the Microsoft Enrollment Service over the Internet, which required Internet connectivity from either the RMS server or from another computer used for offline enrollment of the server. Windows Server 2008 AD RMS has removed the requirement to contact the Microsoft Enrollment Service; it includes a server self-enrollment certificate that is used to sign the SLC locally, removing the previous requirement for an Internet connection to complete the RMS cluster enrollment process.

Web Services

Each server that is installed with the AD RMS server role also requires a number of Web-related server roles and features. The Web Server (IIS) server role is required to provide most of the AD RMS application services, such as licensing and certification. These IIS-based services are called application pipelines. The Windows Process Activation Service and Message Queuing features are also required for AD RMS functionality. The Windows Process Activation Service is used to provide access to IIS features from any application that hosts Windows Communication Foundation services. Message Queuing provides guaranteed message delivery between the AD RMS server and the SQL Server database. All transactions are first written to the message queue and then transferred to the database. If connectivity to the database is lost, the transaction information will be queued until connectivity resumes.

During the installation of the AD RMS server role, you specify the Web site on which the AD RMS virtual directory will be set up. You also provide the address used to enable clients to communicate with the cluster over the internal network. You can specify an unencrypted URL, or you can use an SSL certificate to provide SSL-encrypted connections to the cluster.

Licensing-only Clusters

A licensing-only cluster is optional and is not part of the root cluster; however, it relies on the root cluster for certification and other services (it cannot provide account certification services on its own). The licensing-only cluster is used to provide both publishing licenses and use licenses to users. A licensing-only cluster can contain a single server, or you can add multiple servers to provide redundancy and load balancing. Licensing-only clusters are typically deployed to address specific licensing requirements, such as supporting unique rights management requirements of a department or supporting rights management for external business partners as part of an extranet scenario.

Database Services

AD RMS requires a database to store configuration information, such as configuration settings, templates, user keys, and server keys. Logging information is also stored within the database. SQL Server is also used to keep a cache of expanded group memberships obtained from Active Directory to determine if a specific user is a member of a group. For production environments, it is recommended that you use a database server such as SQL Server 2005 or later. For test environments, you can use an internal database that is provided with Windows Server 2008; however, the internal database only supports a single-server root cluster.

How AD RMS Works

Server and client components of an AD RMS solution use various types of eXtensible rights Markup Language (XrML)–based certificates and licenses to ensure trusted connections and protected content. XrML is an industry standard that is used to provide rights that are linked to the use and protection of digital information. Rights are expressed in an XrML license attached to the information that is to be protected. The XrML license defines how the information owner wants that information to be used, protected, and distributed.

AD RMS Deployment Scenarios

To meet specific organizational requirements, AD RMS can be deployed in a number of different scenarios. Each of these scenarios offers unique considerations to ensure a secure and effective rights-management solution. These are some possible deployment scenarios:

■ Providing AD RMS for the corporate intranet
■ Providing AD RMS to users over the Internet
■ Integrating AD RMS with Active Directory Federation Services

Deploying AD RMS within the Corporate Intranet

A typical AD RMS installation takes place in a single Active Directory forest. However, there may be other specific situations that require additional consideration. For example, you may need to provide rights-management services to users throughout a large enterprise with multiple branch offices. For scalability and performance reasons, you might choose to implement licensing-only clusters within these branch offices. You may also have to deploy an AD RMS solution for an organization that has multiple Active Directory forests. Since each forest can only contain a single root cluster, you will have to determine appropriate trust policies and AD RMS configuration between both forests. This will effectively allow users from both forests to publish and consume rights-managed content.

Deploying AD RMS to Users over the Internet

Most organizations have to support a mobile computing workforce, which consists of users that connect to organizational resources from remote locations over the Internet. To ensure that mobile users can perform rights-management tasks, you have to determine how to provide external access to the AD RMS infrastructure. One method is to place a licensing-only server within your organization’s perimeter network. This will allow external users to obtain use and publishing licenses for protecting or viewing information. Another common solution is to use a reverse proxy server such as Microsoft Internet Security and Acceleration (ISA) Server 2006 to publish the extranet AD RMS cluster URL. The ISA server then handles all requests from the Internet to the AD RMS cluster and passes on the requests when necessary. This is a more secure and effective method, so it is typically recommended over placing licensing servers within the perimeter network.

Deploying AD RMS with Active Directory Federation Services

Windows Server 2008 includes the Active Directory Federation Services (AD FS) server role, which is used to provide trusted inter-organizational access and collaboration scenarios between two organizations. AD RMS can take advantage of the federated trust relationship as a basis for users from both organizations to obtain RAC, use, and publishing licenses. In order to install AD RMS support for AD FS, you will need to have already deployed an AD FS solution within your environment. This scenario is recommended if one organization has AD RMS and the other does not. If both have AD RMS, trust policies are typically recommended.


Setting up Transactional Replication in SQL Server 2008 R2.

Posted by Alin D on December 9, 2010

Replication is one of the high-availability features available in SQL Server. Transactional replication is used when DML changes or DDL schema changes performed on an object in a database on one server need to be reflected in a database residing on another server. This change happens almost in real time (i.e. within seconds). In this article, I will demonstrate a step-by-step approach to configuring transactional replication in SQL Server 2008 R2.

Scenario: An Address table which belongs to the Person schema in the AdventureWorks database is replicated to the Adventureworks_Replication database residing on the same server. The Adventureworks_Replication database acts as a subscriber. The subscriber is normally present on a separate database server.

Before we start with the configuration, we need to understand three important terms:

1. Publisher
2. Subscriber
3. Distribution Database

Let’s discuss each of these in detail.

Publisher:

The Publisher can be referred to as a database on which the DML or DDL schema changes are going to be performed.

Subscriber:

The Subscriber is the database which is going to receive the DML changes as well as the DDL schema changes which are performed on the publisher. The subscriber database normally resides on a different server in another location.

Distribution Database:

A database which contains all the replication commands. Whenever any DML or DDL schema changes are performed on the publisher, the corresponding commands generated by SQL Server are stored in the distribution database. This database can reside on the same server as the publisher, but it is always recommended to keep it on a separate server for better performance. Normally, I have observed that if you keep the distribution database on the same machine as the publisher database and there are many publishers, it always has an impact on the performance of the system. This is because one distrib.exe file gets created for each publisher.

Let us now begin configuring transactional replication.

There are three steps involved in configuring transactional replication:

1. Configuring the Distribution Database.

2. Creating the publisher.

3. Creating the subscriber.

Configuring the Distribution Database

1. Connect to the Microsoft SQL Server 2008 R2 Management Studio.

2.  Right Click on the Replication node and Select Configure Distribution as shown in the screen capture below:

3. A new window appears on the screen as shown in the screen capture below:

4. Click  the Next> button and a new window appears on the screen as shown in the screen capture below:

5. As you can see in the above screen capture, it gives the user two choices. The first choice is whether the server on which replication is being configured will itself host the distribution database; the second asks whether some other server will host the distribution database. The user can select either choice as per his/her requirements. I decided to use the first option, i.e. the server on which replication is configured will itself hold the distribution database. Then Click on the Next> button as shown in the screen capture above.

6. A new window appears as shown in the screen capture below:

7. Select the first option, i.e. Yes, configure the SQL Server Agent service to start automatically and click on the Next> button as shown in the screen capture above.

8. A new window appears on the screen as shown in the screen capture below:

As you can see in the above screen capture, you are asked where the Snapshot folder should reside on the Server. Let us first understand what the Snapshot folder exactly is.

The Snapshot Agent prepares snapshot files containing the schema and data of published tables and database objects, and stores the files in the snapshot folder. This folder should never be placed on the C drive of the server, i.e. the drive which is hosting the operating system.

Create a folder on any other drive to hold the Snapshot folder and Click on the Next> button as shown in the screen capture above.

9. A new window appears as shown in the screen capture below:

As you can see in the above screen capture, it displays information such as the distribution database name and the location where the data and log files will reside. Click on the Next> button as shown in the screen capture above.

10. A new window appears as shown in the screen capture below:

11. Click on the Next> button.

12. Click on the Next> button as shown in the screen capture below:

13. Click on the Finish button as shown in the screen capture below:

14. Once done, a new database named distribution gets created. To confirm this, expand the System Databases node and you should be able to view the distribution database; please refer to the screen capture below:
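
The same distributor configuration can also be scripted. As a rough T-SQL sketch (the password and working directory below are placeholders, not values from this walkthrough), something along these lines mirrors what the wizard does:

-- Make the local server its own distributor (password and snapshot path are placeholders).
USE master;
EXEC sp_adddistributor
    @distributor = @@SERVERNAME,
    @password = N'StrongPassword1!';

EXEC sp_adddistributiondb
    @database = N'distribution',
    @security_mode = 1;                      -- Windows authentication

EXEC sp_adddistpublisher
    @publisher = @@SERVERNAME,
    @distribution_db = N'distribution',
    @working_directory = N'D:\ReplData';     -- snapshot folder, kept off the system drive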

Creating the Publisher

The following steps need to be followed while creating the publisher.

1. Right Click on Local Publications and select New Publication; please refer to the screen capture below:

2. Click on the Next> button as shown in the screen capture below.

3. Select the database which is going to act as the publisher. In our case, I select the AdventureWorks database. Please refer to the screen capture below and Click on the Next> button.

4. Select Transactional Replication from the available publication type and Click on the Next> button as shown in the screen capture below:

5. Select the objects that you want to publish. In this example, we will select the Address table (in the Person schema) which we need to replicate. Select the table as shown in the screen capture below and Click on the Next> button. One important point to note is that only tables which have a primary key column can be replicated in transactional replication.

6. Since there are no filtering conditions, Click on the Next> button as shown in the screen capture below:

7. Check the first radio button as shown in the screen capture below and Click on the Next> button.

8. Click on the Security Settings tab as shown in the screen capture below.

A new window appears as shown in the screen capture below.

Select Run under the SQL Server Agent service account as the account under which the Snapshot Agent process will run, and connect to the publisher by impersonating the process account, as shown in the screen capture below; then Click on the OK button.

Click on the Next> button as shown in the screen capture below.

9. Click on the Next> button as shown in the screen capture below.

10. Give a suitable name to the publisher and Click on the Finish button as shown in the screen capture below.
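
For reference, the same publication and article can be scripted instead of using the wizard. A rough T-SQL sketch that follows the AdventureWorks example in this walkthrough (run at the publisher; the publication name is chosen here purely for illustration):

USE AdventureWorks;
-- Enable the database for publishing.
EXEC sp_replicationdboption
    @dbname = N'AdventureWorks',
    @optname = N'publish',
    @value = N'true';

-- Create a transactional publication (the name is illustrative).
EXEC sp_addpublication
    @publication = N'AdventureWorks_Address',
    @status = N'active',
    @repl_freq = N'continuous';

-- Create the Snapshot Agent job for the publication.
EXEC sp_addpublication_snapshot
    @publication = N'AdventureWorks_Address',
    @publisher_security_mode = 1;

-- Add the Person.Address table as an article.
EXEC sp_addarticle
    @publication = N'AdventureWorks_Address',
    @article = N'Address',
    @source_owner = N'Person',
    @source_object = N'Address';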

Creating the Subscriber

Once the publisher is created the next step is to create the subscriber for it.

The following steps need to be performed for creating the subscriber.

1. Right Click on the publisher created and select New Subscriptions as shown in the screen capture below.

2. Click on the Next> button as shown in the screen capture below.

3. Click on the Next>  button as shown in the screen capture below.

4. Click on the Next> button as shown in the screen capture below.

5. As shown in the screen capture below, it asks for the Subscriber name as well as the subscription database. The subscriber database can be created by restoring the publisher database at the start itself or by creating a new database as shown in the screen capture below.

If you have already restored the backup of the database which is a publisher, then the database name will appear in the dropdown as shown in the screen capture below:

If we want to create the subscriber database now, it can be done as follows:

Click on New Database as shown in the screen capture below.

A new window appears as shown below. Give a suitable database name as well as the path where the data and log file are going to reside.

Click on the OK button.

If the subscriber is some other server, then the following steps need to be performed.

Click on the down arrow available on the Add Subscriber button as shown in the screen capture below.

Click on Add SQL Server Subscriber as shown in the screen capture above.

A new window appears which asks for the SQL Server name as well as the authentication needed to connect to the SQL Server; please refer to the screen capture below.

6. Click on the Next> button as shown in the screen capture below.

7. Click on the button shown in the screen capture below. Here we need to specify the process account as well as the connection options for the distribution agent.

8. A new window appears as shown in the screen capture below.

9. Specify the distribution agent to run under the SQL Server Agent service account. Also connect to the distributor as well as the subscriber by impersonating the process account. Please refer to the screen capture below.

10. Click on the OK button as shown in the screen capture above.

11. Click on the Next> button as shown in the screen capture below.

12. Ensure that the Agent is scheduled to Run Continuously and then click on the Next> button as shown in the screen capture below.

13. Ensure that the Subscriber is initialized immediately and then click on the Next> button as shown in the screen capture below.

14. Click on the Next> button as shown in the screen capture below.

15. Click on the Finish button as shown in the screen capture below.

16. This creates a subscriber for the corresponding publisher.

17. Expand the publisher node and you shall be able to view the subscriber as shown in the screen capture below.

Thus, we have successfully set up transactional replication in SQL Server 2008 R2.
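
For reference, the push subscription configured through the wizard has a T-SQL equivalent as well; a hedged sketch run at the publisher, assuming the illustrative publication name used in the previous sketch and the Adventureworks_Replication subscriber database:

USE AdventureWorks;
-- Register the subscription at the publisher.
EXEC sp_addsubscription
    @publication = N'AdventureWorks_Address',
    @subscriber = @@SERVERNAME,                     -- same server in this example
    @destination_db = N'Adventureworks_Replication',
    @subscription_type = N'Push',
    @sync_type = N'automatic';

-- Create the Distribution Agent job for the push subscription.
EXEC sp_addpushsubscription_agent
    @publication = N'AdventureWorks_Address',
    @subscriber = @@SERVERNAME,
    @subscriber_db = N'Adventureworks_Replication',
    @subscriber_security_mode = 1;                  -- impersonate the process account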


An Epigrammatic Account of SQL

Posted by Alin D on September 4, 2010

The history of SQL begins in an IBM laboratory in San Jose, California, where SQL was developed in the late 1970s. The initials stand for Structured Query Language, and the language itself is often referred to as “sequel.” It was originally developed for IBM’s DB2 product as a basic component of a relational database management system, or RDBMS. In fact, SQL makes an RDBMS possible. SQL is a nonprocedural language, in contrast to the procedural or third-generation languages such as COBOL and C that had been created up to that time. The characteristic that distinguishes a DBMS from an RDBMS is that the RDBMS provides a set-oriented database language. For most RDBMSs, this set-oriented database language is SQL. Two standards organizations, the American National Standards Institute (ANSI) and the International Standards Organization (ISO), currently maintain SQL standards for the industry. The ANSI-92 standard is the standard for the SQL used throughout this article. Although these standards bodies publish standards for database system designers to follow, all database products differ from the ANSI standard to some degree. In addition, most systems provide some proprietary extensions to SQL that extend the language into a true procedural language. We have used various RDBMSs to prepare the examples in this article to give you an idea of what to expect from the common database systems.

A modest background on the evolution of databases and database theory will help you appreciate the workings of SQL. Database systems store information in every conceivable business environment. From large tracking databases such as airline reservation systems to a child’s baseball card collection, database systems store and distribute the data that we depend on. Until the last few years, large database systems could be run only on large mainframe computers. These machines have traditionally been expensive to design, purchase, and maintain. However, today’s generation of powerful, inexpensive workstation computers enables programmers to design software that maintains and distributes data quickly and inexpensively.

The Relational Database Model

The most popular data storage model is the relational database, which is based on a seminal paper, “A Relational Model of Data for Large Shared Data Banks,” written by Dr. E. F. Codd in 1970. SQL evolved to service the concepts of the relational database model introduced by Dr. Codd, who also formulated a set of rules for the relational model, commonly referred to as Codd’s 12 Rules, which are a basic milestone in the RDBMS concept.

The following rules were set out by Dr. Codd and are commonly known as Codd’s database rules.

1. All information in a relational database (including table and column names) is represented explicitly as values in tables.

2. Every value in a relational database is guaranteed to be accessible by using a combination of the table name, primary key value, and column name.

3. The DBMS provides systematic support for the treatment of null values (unknown or inapplicable data), distinct from default values, and independent of any domain.

4. The description of the database and its contents is represented at the logical level as tables and can therefore be queried using the database language.

5. At least one supported language must have a well-defined syntax and be comprehensive. It must support data definition, manipulation, integrity rules, authorization, and transactions.

6. All views that are theoretically updatable can be updated through the system.

7. The DBMS supports not only set-level retrievals but also set-level inserts, updates, and deletes.

8. Application programs and ad hoc programs are logically unaffected when physical access methods or storage structures are altered.

9. Application programs and ad hoc programs are logically unaffected, to the extent possible, when changes are made to the table structures.

10. The database language must be capable of defining integrity rules. They must be stored in the online catalog, and they cannot be bypassed.

11. Application programs and ad hoc requests are logically unaffected when data is first distributed or when it is redistributed.

12. It must not be possible to bypass the integrity rules defined through the database language by using lower-level languages.

Earlier databases typically followed a hierarchical "parent/child" model; that is, a parent node would contain file pointers to its children. This method has several advantages and many disadvantages. In its favor is the fact that the physical structure of data on a disk becomes unimportant: the programmer simply stores pointers to the next location, so data can be accessed in this manner. Also, data can be added and deleted easily. However, different groups of information could not be easily joined to form new information, and the format of the data on the disk could not be arbitrarily changed after the database was created; doing so would require the creation of a new database structure.

Codd's idea for an RDBMS uses the mathematical concepts of relational algebra to break down data into sets and related common subsets. Because information can naturally be grouped into distinct sets, Dr. Codd organized his database system around this concept. Under the relational model, data is separated into sets that resemble a table structure. This table structure consists of individual data elements called columns or fields. A single set of a group of fields is known as a record or row. For instance, to create a relational database consisting of employee data, you might start with a table called EMPLOYEE that contains the following pieces of information: Name, Age, and Occupation. These three pieces of data make up the fields in the EMPLOYEE table.

The EMPLOYEE table.

Name Age Occupation

Mehedi 12 Electrical engineer

Gias 44 Museum curator

Kaium 42 Assistant Chef

Abdul Karim 29 Student

Mohammad 32 Game programmer

Kamruzzaman 46 Singer

The six rows are the records in the EMPLOYEE table.

To retrieve a specific record from this table, for example Kaium's, a user would instruct the database management system to retrieve the rows where the NAME field is equal to 'Kaium'. If the DBMS had been instructed to retrieve all the fields in the record, the employee's name, age, and occupation would be returned to the user. SQL is the language that tells the database to retrieve this data. A simple SQL statement that retrieves every record in the table is

SELECT *

FROM EMPLOYEE
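
To pick out just one employee's record, say Kaium's, the statement adds a WHERE clause. A minimal sketch (quoting and case rules vary slightly between database products):

SELECT Name, Age, Occupation
FROM EMPLOYEE
WHERE Name = 'Kaium'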

It is important to note that the exact syntax is not important at this point. Because the various data items can be grouped according to obvious relationships, the relational database model gives the database designer a great deal of flexibility to describe the relationships between the data elements. Through the mathematical concepts of join and union, relational databases can quickly retrieve pieces of data from different sets (tables) and return them to the user or program as one "joined" collection of data. The join feature enables the designer to store sets of information in separate tables to reduce repetition.

The DUTY table.

Name Duties

Skender Cook

Lily Huq Teacher

Shovon Dancer

Idiorty Superintendent

Designing the Database Structure

The most important decision for a database designer, after the hardware platform and the RDBMS have been chosen, is the structure of the tables. Decisions made at this stage of the design can affect performance and programming later during the development process. The process of separating data into distinct, unique sets is called normalization.
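
For example, rather than repeating an employee's details next to every duty, the designer might keep the EMPLOYEE and DUTY data in separate tables linked by a key. A minimal sketch (table names follow the examples above; column names and data types are illustrative and vary by RDBMS):

CREATE TABLE EMPLOYEE (
    EmployeeID  INT PRIMARY KEY,
    Name        VARCHAR(50),
    Age         INT,
    Occupation  VARCHAR(50)
);

CREATE TABLE DUTY (
    DutyID      INT PRIMARY KEY,
    EmployeeID  INT REFERENCES EMPLOYEE(EmployeeID),
    Duty        VARCHAR(50)
);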

The Modern Database Landscape

Computing technology has made a permanent change in the ways businesses work around the world. Information that was at one time stored in warehouses full of filing cabinets can now be accessed instantaneously at the click of a mouse button. Orders placed by customers in foreign countries can now be instantly processed on the floor of a manufacturing facility. Even though 20 years ago much of this information had been transported onto corporate mainframe databases, offices still operated in a batch-processing environment. If a query needed to be performed, someone notified the management information systems (MIS) department; the requested data was delivered as soon as possible. In addition to the development of the relational database model, two technologies led to the rapid growth of what are now called client/server database systems. The first important technology was the personal computer. Inexpensive, easy-to-use applications such as Lotus 1-2-3 and Word Perfect enabled employees (and home computer users) to create documents and manage data quickly and accurately. Users became accustomed to continually upgrading systems because the rate of change was so rapid, even as the price of the more advanced systems continued to fall.

The second important technology was the local area network (LAN) and its integration into offices across the world. Although users were accustomed to terminal connections to a corporate mainframe, now word processing files could be stored locally within an office and accessed from any computer attached to the network. After the Apple


Posted in TUTORIALS | Leave a Comment »

Host PHP in the Cloud with Windows Azure

Posted by Alin D on August 24, 2010

More than a buzzword in executive meetings, cloud computing is the next big thing in the world of IT. Clouds offer an infinite amount of resources, both on demand and in pay-per-use models: computer resources on tap! In this article, I’ll focus on one of these cloud platforms, Microsoft’s Windows Azure, and give you all the information you need to get started developing PHP applications on this platform. Although we won’t go too deep into the technicalities, I will point you to further information and resources on specific points as we go.

Different Clouds

Choice is a good thing. The great news for us developers is that there are many choices when it comes to cloud computing. Microsoft, Google, Amazon, Rackspace, GoGrid, and many others offer cloud products that have their own special characteristics. It looks like the whole world is dividing these offers into two distinct categories: IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service)—the difference between the two is illustrated in Figure 1, “The difference between cloud platforms”.

Figure 1. The difference between cloud platforms

First, let’s look at IaaS. Amazon EC2 was the first to offer virtual machines that could run your application. These virtual machines, however, are under your control, like physical servers in your data center. This means that you’re in control of patches, security, maintenance to the operating system—and all with full root or administrator access. The cloud platform takes the infrastructure woes out of your hands, as networking, load balancers, and firewalls are handled for you.

Next, there’s PaaS. This approach is also based on virtual machines, but you don’t have control over them. Instead, a set of tools and APIs is provided to let you package your application and upload it onto your virtual machine, so the only thing you have to worry about is your application. The networking, operating system, and so on are all maintained by the cloud platform.

All cloud vendors share common features, including virtual machines, and storage that’s available through REST-based protocols. Then again, each offering has its own unique features, which is good: clouds are still in a very innovative phase, and as developers we have the luxury of choosing the platform that’s best suited to our particular applications.

Windows Azure Platform Overview

Throughout this article, I’ll be describing the Windows Azure Platform, Microsoft’s PaaS offering to the world of cloud computing. But before we dive into technical details, let’s get a feel for the components included in this offering, and what they do.

Windows Azure

Windows Azure is the core component of the Windows Azure Platform. The marketing folks describe this component as the “operating system for the Azure cloud.” I’m not a big fan of marketing folks and their quotes, but for once, they’re right! Windows Azure is the heart of Microsoft’s offering, and it does what you’d expect of any operating system: it allows you to run your application on a virtual machine, either in a web role (with a web server installed) or in a worker role—a cleaner virtual machine that allows you to host other types of applications.

Windows Azure also allows you to scale up rapidly: simply change a configuration value and you'll have multiple instances running at the snap of your fingers. Load balancing is taken care of automatically and requires no configuration.
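
For example, scaling out a web role is usually just a matter of raising the instance count in your service configuration and redeploying that configuration. A minimal sketch (the service and role names are illustrative, and the schema namespace attribute is omitted for brevity):

<ServiceConfiguration serviceName="MyPhpApp">
  <Role name="WebRole">
    <!-- raise this number to run more load-balanced instances -->
    <Instances count="4" />
  </Role>
</ServiceConfiguration>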

Next to the operating system, a set of storage services is included, which is accessible through a REST-based API. Blob storage allows you to host any file: text files, images, downloads, and more. Table storage is, in essence, a document database that has limited querying possibilities but can scale massively. And then there are queues, which are mostly used for communications between web and worker roles.

Windows Azure is the location where your application will be hosted. A web role will host your web application; you’ll probably use blob storage to store files, and possibly table storage (or SQL Azure, which we’ll discuss in a moment) to store your data. Windows Azure is also used by other components of the platform.

SQL Azure

In addition to hosting, you will probably need a place where you can store your relational data. This is where SQL Azure comes in: it’s a slightly modified version of Microsoft SQL Server that delivers all the services you’d expect from a database: tables, views, indexes, stored procedures, triggers, and so on.

SQL Azure provides database services in a scalable and reliable way. Data is replicated across different sites and made available through a load balancer, giving you a lot of performance on the data layer of your application.

Windows Azure Platform AppFabric

Windows Azure Platform AppFabric is, in essence, a combination of two products. There’s an Access Control Service to which you can delegate the tasks of authentication and authorization of users, and there’s the Service Bus, which, in my opinion, is one of the features that really makes Windows Azure stand out. In short, the service bus allows you to establish communication between two endpoints. That might be a service that publishes messages to a set of subscribers, but the service bus can also be used for punching holes in firewalls!

Imagine having applications A and B, each in different networks, behind different firewalls. No direct communication seems possible, yet the AppFabric service bus will make sure both applications can communicate. There’s no need to open up ports in your company’s firewall to have your cloud application communicate with an on-premises application.

Live Services

Live Services provides an online identity system that you probably already know: Windows Live ID. Live Services also offers features like presence awareness, search, mapping via Bing Maps, synchronization, and more.

Codename Projects: Dallas and Sydney

These products are still in their incubation phases, and will probably undergo some changes in the future. Nevertheless, they already offer some great features. Dallas is basically a Data-as-a-Service solution through which you can subscribe to various sets of data offered in an open format, OData, which is based on REST and Atom. It also provides your business with a new source of revenue: if you’re sitting on a lot of useful data, why not make it available via Dallas and have others pay for using it?

Project Sydney is different: it’s focused on how you communicate with your cloud application. Currently, that communication is completed through the public Internet, but Sydney will allow you to set up a VPN connection to your virtual machines, enabling you to secure communications using your own security certificates and such.

Tools and APIs Available for PHP

When we’re talking about using PHP on a cloud platform like Windows Azure, there are some prerequisites to fulfil before we can start working with the cloud. You’ll need the right tools to build and deploy your application, but also the right APIs—those that allow you to use the platform and all of its features.

Microsoft has been doing a lot of good work in this area. Yes, Windows Azure is a Windows-based platform that seems to target only .NET languages. However, when you look at the tools, tutorials, APIs, and blog posts around PHP and Windows Azure, it is clear that PHP is an equally valued citizen of the platform!

Let’s take a tour of all the tools and APIs that are available for PHP on Windows Azure today. A lot of these tools are very easy to install using the Web Platform Installer—a “check-next-finish” wizard that allows you to install platforms and tools in an easy and efficient manner.

IDE Support

Of course, you can use your favorite editor to work on a PHP application that’ll be hosted on Windows Azure. On the other hand, if you’re using an Eclipse-based editor like Eclipse PDT, Zend Studio, or Aptana, you can take advantage of a great plugin that will speed up your development efforts, shown in Figure 2, “Using Eclipse for development”. The Eclipse plugin for Windows Azure is available at http://windowsazure4e.org. Also, Josh Holmes has prepared a handy post, Easy Setup for PHP on Azure Development.

Figure 2. Using Eclipse for development

After installing the plugin, you’ll find the following features have been added to your IDE:

  • Project Creation and Migration allows for the easy migration of an existing application to a Windows Azure application. This tool will get your application ready for packaging and deployment to Windows Azure.
  • Storage Explorer provides access to your Windows Azure storage accounts and allows you to upload and download blobs, query tables, list queues, and so on.
  • Debugging and local testing is also included: there’s no need to deploy and test your application immediately on Windows Azure. A “local cloud” simulation environment is available.

Packaging

Once your application is ready for deployment, it should be packaged for Windows Azure. Packaging is basically the process of creating a ZIP archive of your application and embedding a manifest of all the included files and their configuration requirements.

The Eclipse plugin for Windows Azure contains this feature. However, if you don’t use Eclipse as your IDE, or if you’re working in a non-Windows environment, you can package your application using the Windows Azure command-line tools for PHP developers.

Development Tools and SDKs

Next, let’s take a spin around some of the tools and SDKs that Windows Azure makes available to developers.

Windows Azure SDK for PHP

If you’re planning on migrating an application or building a new one for Windows Azure, chances are that you’ll need storage. This is where the Windows Azure SDK for PHP comes in handy: it gives you easy access to the blob storage, table storage and queue services provided by Windows Azure. You can download this SDK as a stand-alone, open-source package that allows you to access storage from both on-premises locations and your cloud application. If you’re using the Eclipse plug-in we discussed earlier, you’ll find this API is included.

The process of utilizing storage always starts with setting up your credentials: an account name and a shared key (think of this as a very long password). Then, you can use one of the specific classes available for blob storage, table storage, or queue storage.

Here’s an example of blob storage in action. First, I create a container (think of this as a virtual hard drive). Then, I upload a file from my local hard drive to blob storage:

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();
$storageClient->createContainer('testcontainer');

// upload /home/maarten/example.txt to Windows Azure
$result = $storageClient->putBlob('testcontainer', 'example.txt', '/home/maarten/example.txt');

Reading the blob afterwards is fairly straightforward:

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();

// download file to /home/maarten/example.txt
$storageClient->getBlob('testcontainer', 'example.txt', '/home/maarten/example.txt');

Table storage is a bit more complex. It’s like a very scalable database that’s not bound to a schema, and has limited querying possibilities. To use table storage, you’ll require some classes that can be used both by your PHP application and Windows Azure table storage. Here’s an example class representing a person:

class Person extends Microsoft_WindowsAzure_Storage_TableEntity
{
  /**
   * @azure Name
   */
  public $Name;

  /**
   * @azure Age Edm.Int64
   */
  public $Age;
}

Inserting an instance of Person into the table is as easy as creating a new instance and assigning it some properties. After that, the table storage API in the Windows Azure SDK for PHP allows you to insert the entity into a table named testtable:

/** Microsoft_WindowsAzure_Storage_Table */
require_once 'Microsoft/WindowsAzure/Storage/Table.php';

$entity = new Person('partition1', 'row1');
$entity->Name = "Maarten";
$entity->Age = 25;

$storageClient = new Microsoft_WindowsAzure_Storage_Table('table.core.windows.net', 'myaccount', 'myauthkey');
$storageClient->insertEntity('testtable', $entity);

That was a lot of information in one code snippet! First of all, what are partition1 and row1? Well, those are the partition key and row key. The partition key is a logical grouping of entities. In an application where users can contribute blog posts, for example, a good candidate for the partition key would be the username—this would allow you to easily query for all data related to a given user. The row key is the unique identifier for the row.
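
Retrieving that entity later works the other way around: you address it by its partition key and row key. A minimal sketch, reusing the account details and the Person class from above (the retrieval method name may differ slightly between SDK versions):

/** Microsoft_WindowsAzure_Storage_Table */
require_once 'Microsoft/WindowsAzure/Storage/Table.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Table('table.core.windows.net', 'myaccount', 'myauthkey');

// look up the entity inserted above and map it back onto the Person class
$person = $storageClient->retrieveEntityById('testtable', 'partition1', 'row1', 'Person');
echo $person->Name; // "Maarten"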

Queues follow the same idea—there’s an API that allows you to put, get, and delete messages from the queue on Windows Azure. Queues are also guaranteed to be processed: when a message is read from the queue, data is made invisible for a specific time. If, after that time, the message has not been explicitly removed, for example because a batch script has crashed, the message will re-appear and be available for processing again.
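
Working with queues looks much like the blob and table examples above. A minimal sketch (the queue name and message text are placeholders, and method signatures may differ slightly between SDK versions):

/** Microsoft_WindowsAzure_Storage_Queue */
require_once 'Microsoft/WindowsAzure/Storage/Queue.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Queue();
$storageClient->createQueue('testqueue');

// put a message on the queue
$storageClient->putMessage('testqueue', 'Resize image example.jpg');

// read (at most) one message and process it
$messages = $storageClient->getMessages('testqueue', 1);
foreach ($messages as $message) {
    // ... do the actual work here ...

    // explicitly remove the message so it does not re-appear later
    $storageClient->deleteMessage('testqueue', $message);
}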

The Windows Azure SDK for PHP also has some extra features that are specific to both PHP and Windows Azure. This includes features like a session storage provider that allows you to share web session data over multiple web role instances. Another feature is a stream wrapper that allows you to use standard file functions like fopen on blob storage.
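
The stream wrapper, for instance, lets existing file-based code talk to blob storage with almost no changes. A minimal sketch (assuming the 'azure://' protocol name that the SDK registers by default):

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();
$storageClient->registerStreamWrapper();

// standard file functions now work against blob storage
$handle = fopen('azure://testcontainer/example.txt', 'r');
$contents = stream_get_contents($handle);
fclose($handle);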

An example application, ImageCloud, which uses all the features described above, is available for download on my blog.

SQL Server Driver for PHP

The SQL Server Driver for PHP allows PHP developers to access SQL Server databases that are hosted on SQL Server or SQL Azure. The SQL Server Driver for PHP relies on the Microsoft SQL Server ODBC Driver to handle low-level communication with SQL Server. As a result, the SQL Server Driver for PHP is only supported on Windows and Windows Azure. It can be downloaded and installed as a PHP extension.

When you download this driver, be sure to download version 2.0. This version has the additional benefit that it provides PDO (PHP Data Objects) support, which allows you to quickly switch between, for example, MySQL and SQL Server.

Now, let’s imagine you have an SQL Azure database. The following code shows how you can connect to the blog database on your SQL Azure database server and retrieve the posts ordered by publication date:

// Connect to SQL Azure using PDO (the pdo_sqlsrv driver)
$connection = new PDO(
  'sqlsrv:Server=tcp:bvoj6aovnk.database.windows.net;Database=blog',
  'sqladm@bvoj6aovnk',
  'mypassword'
);

// Fetch the posts, newest first
$posts = array();
$query = 'SELECT * FROM posts ORDER BY PubDate DESC';
$statement = $connection->query($query);
// fetchObject() maps each row onto a Post class assumed to be defined elsewhere
while ( $row = $statement->fetchObject('Post') ) {
  $posts[] = $row;
}

AppFabric SDK for PHP

As I mentioned before, the Windows Azure Platform AppFabric (not to be confused with the Windows Server AppFabric) enables you to delegate user authentication and authorization, and to punch firewalls and connect applications across different protected networks with ease. You can download it from http://dotnetservicesphp.codeplex.com.

In terms of authentication and authorization, it’s important to know a little about claims-based authentication and federation—a topic on which some interesting resources are available. Basically, your application establishes a trust relationship with an authentication authority (like Windows Azure Platform AppFabric), which means that your application trusts users that are authenticated with that authority. Next, your application will ask its users to claim their rights. For example, my application could ask the user to claim that they can create orders:

$requiredClaims = array('CreateOrder' => true);
if (ValidateClaimUtil::ValidateClaims($requiredClaims, "phpservice", 'http://localhost/SalesDashboard/', $signingKey))
{
  // User is allowed to create an order!
}
else
{
  // User is not authorized.
}

The Windows Azure Platform AppFabric Access Control Service will validate that the user has this claim, and sign a security token with that information. Since your application trusts this authority, it will either continue or fail on the basis of whether or not the claim is valid.

Now imagine having two applications that cannot connect to each other because of firewall-related policies. If both applications can establish an outgoing connection to the service bus, the service bus will relay communication between the two applications. It’s as easy as that—and incredibly useful if you have a tough IT department!

Figure 3. The benefits of Windows Azure Platform AppFabric Service Bus

Showing you example code of how this works would lead us too far (since it would involve some configuration and set up tasks). But if you think this sounds like a great feature, check the AppFabric for PHP website, which contains plenty of tutorials on this matter.

Other Features

In addition to all the features and APIs we’ve already investigated, there are a number of other features and products that are worth looking at. These features aren’t always Windows Azure-specific, like the URL rewriting module for IIS7, but your application can benefit greatly from them all the same.

PHP Azure Contributions

The Windows Azure platform provides some useful features, like reading configuration files (which can be modified even after a deployment has been done), logging within the Windows Azure environment, and accessing local storage on a virtual machine to store files temporarily. Unfortunately, these features are baked into the Windows Azure Cloud Guest OS, and not available as REST services. Luckily, however, these features are exposed as a C dynamic link library, which means that writing a PHP extension to interface with them is a logical step. And that’s exactly what the PHP Azure Contributions library provides: a PHP extension to make use of configuration data, logging, and local storage. Imagine having a configuration value named EmailSubject in your ServiceConfiguration.cscfg file. Reading this value is very easy using the PHP Azure Contributions extension:

$emailSubject = azure_getconfig("EmailSubject");

We can also write data to the Windows Azure diagnostics log. Here’s an example in which I’m writing an informational message in the diagnostics log:

azure_log("This is some useful information!", "Information");

The PHP Azure Contributions project is available on CodePlex at http://phpazurecontrib.codeplex.com.

URL Rewriting

As a PHP developer, you may already use URL rewriting. In Apache’s .htaccess files, it’s very easy to enable the rewrite engine, and to rewrite incoming URLs to real scripts. For example, the URL http://www.example.com/products/books may be mapped to http://www.example.com/index.php?page=products&category=books on your server. This technique is also available in IIS7, the Microsoft web server that’s also used in Windows Azure web roles. The above URL rewriting example can be defined in the Web.config file in the root of your Windows Azure application:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="RewriteProductsUrl" enabled="true" stopProcessing="true">
          <match url="^products/([^/]+)/?$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php?page=products&amp;category={R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Also note that, because your application is hosted on an IIS web server in Windows Azure, you can use any HttpModule for IIS, just as you would for a traditionally hosted application. This makes it easy to enable output compression, leverage the IIS authentication and authorization features, and more. Download the IIS URL Rewrite module from http://www.iis.net/download/urlrewrite.

WinCache Extension

As you may know, PHP files are interpreted into bytecode and executed from that bytecode on every request. This process is quite fast, but on high-traffic websites, it’s recommended that we cache the bytecode and skip script interpretation. This technique increases a website’s performance without requiring additional resources.

On Linux, accelerator modules that utilize these techniques, like APC and IonCube, are very common. These also work on Windows and could potentially also work on Windows Azure. However, Microsoft also released its own module that applies this technique: the WinCache extension for PHP. This extension is the fastest PHP accelerator on Windows, and also provides features like storing session data in this cache layer. The Wincache extension for PHP can be downloaded from http://www.iis.net/download/wincacheforphp.
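
Enabling the extension (and its session handler) is a php.ini change rather than a code change. A minimal sketch, assuming the standard extension file name for a Windows PHP build:

; php.ini
extension=php_wincache.dll

; optionally store PHP session data in the WinCache layer
session.save_handler = wincache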

CDN (Content Delivery Network)

When using Windows Azure blob storage, you’ll find that a full-featured content delivery network (CDN) is available as well. A CDN ensures that, for example, when a user downloads an image, that image will be retrieved from a storage server that’s close to that user’s client. This ensures that the download speed and latency are optimal, and the user receives the image very quickly.

With blob storage, enabling the CDN is as easy as clicking a button. After that, your public containers are replicated to the CDN, which allows your site’s users to retrieve files and resources as swiftly as possible!

Figure 4. Using the Windows Azure CDN

Domain Name Mapping

With Windows Azure, your application will be assigned a domain name under the cloudapp.net domain—for example, myphpapp.cloudapp.net. I think you’ll agree that this isn’t the greatest URL. It gets even worse when you’re using blob storage for hosting files: myphpappstorage.blob.core.windows.net is, well, just plain ugly!

Luckily, all URLs in Windows Azure can be mapped to a custom domain name. So, to map www.myphpapp.com to myphpapp.cloudapp.net, you just need to add a CNAME record to your name server. The same applies to blob storage: storage.myphpapp.com can be mapped to the very long myphpappstorage.blob.core.windows.net through the addition of a CNAME record to your DNS server.
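
In practice this is a single record per name at your DNS provider. A minimal sketch in zone-file notation, using the example names above:

www.myphpapp.com.       IN  CNAME  myphpapp.cloudapp.net.
storage.myphpapp.com.   IN  CNAME  myphpappstorage.blob.core.windows.net.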

Conclusion

In this article, we’ve taken a snapshot of the Windows Azure platform from a PHP perspective. While I’m slightly biased, having contributed to the Windows Azure SDK for PHP, I do think that the Windows Azure platform is a great choice for hosting PHP applications in a highly scalable cloud environment. I also feel that there’s great value to be found in features like the Windows Azure AppFabric Service Bus. The bottom line is: I believe that Microsoft is doing its best to make PHP a first-class citizen on its cloud platform.

Posted in Azure | Leave a Comment »

10 mistakes Windows administrators make

Posted by Alin D on August 23, 2010

Maybe you’re a brand new network admin. You’ve taken some courses, you’ve passed some certification exams, perhaps you even have a Windows domain set up at home. But you’ll soon find that being responsible for a company network brings challenges you hadn’t anticipated.

Or maybe you’re an experienced corporate IT person, but up until now, you’ve worked in a UNIX environment. Now — either due to a job change or a new deployment in your current workplace — you find yourself in the less familiar world of Windows.

This article is aimed at helping you avoid some of the most common mistakes made by new Windows administrators.

1: Trying to change everything all at once

When you come into a new job, or start working with a new technology, you may have all sorts of bright ideas. If you’re new to the workplace, you immediately hone in on those things that your predecessors were (or seem to have been) doing wrong. You’re full of all the best practices and tips and tricks that you learned in school. If you’re an experienced administrator coming from a different environment, you may be set in your ways and want to do things the way you did them before, rather than taking advantage of features of the new OS.

Either way, you’re likely to cause yourself a great deal of grief. The best bet for someone new to Windows networking (or to any other job, for that matter) is give yourself time to adapt, observe and learn, and proceed slowly. You’ll make your own job easier in the long run and make more friends (or at least fewer enemies) that way.

2: Overestimating the technical expertise of end users

Many new administrators expect users to have a better understanding of the technology than they do. Don’t assume that end users realize the importance of security, or that they will be able to accurately describe the errors they’re getting, or that they know what you mean when you tell them to perform a simple (to you) task such as going to Device Manager and checking the status of the sound card.

Many people in the business world use computers every day but know very little about them beyond how to operate a few specific applications. If you get frustrated with them, or make them feel stupid, most of them will try to avoid calling you when there’s a problem. Instead they’ll ignore it (if they can) or worse, try to fix it themselves. That means the problem may be far worse when you finally do become aware of it.

3: Underestimating the technical expertise of end users

Although the above applies to many of your users, most companies will have at least a few who are advanced computer hobbyists and know a lot about technology. They’re the ones who will come up with inventive workarounds to circumvent the restrictions you put in place if those restrictions inconvenience them. Most of these users aren’t malicious; they just resent having someone else in control of their computer use — especially if you treat them as if they don’t know anything.

The best tactic with these users is to show them that you respect their skills, seek out their input, and let them know the reasons for the rules and restrictions. Point out that even a topnotch racecar driver who has demonstrated the ability to safely handle a vehicle at high speed must abide by the speed limits on the public roads, and it’s not because you doubt his/her technology skills that you must insist on everyone following the rules.

4: Not turning on auditing

Windows Server operating systems have built-in security auditing, but it’s not enabled by default. It’s also not one of the best documented features, so some administrators fail to take advantage of it. And that’s a shame, because with the auditing features, you can keep track of logon attempts, access to files and other objects, and directory service access.


Active Directory Domain Services (AD DS) auditing has been enhanced in Windows Server 2008 and can be done more granularly now. Without either the built-in auditing or third-party auditing software running, it can be almost impossible to pinpoint and analyze what happened in a security breach.

5: Not keeping systems updated

This one ought to be a no-brainer: Keeping your servers and client machines patched with the latest security updates can go a long way toward preventing downtime, data loss, and other consequences of malware and attacks. Yet many administrators fall behind, and their networks are running systems that aren’t properly patched.

This happens for several reasons. Understaffed and overworked IT departments just may not get around to applying patches as soon as they’re released. After all, it’s not always a matter of “just doing it” — everyone knows that some updates can break things, bringing your whole network to a stop. Thus it’s prudent to check out new patches in a testbed environment that simulates the applications and configurations of your production network. However, that takes time — time you may not have.

Automating the processes as much as possible can help you keep those updates flowing. Have your test network ready each month, for instance, before Microsoft releases its regular patches. Use Windows Server Update Services (WSUS) or other tools to simplify and automate the process once you’ve decided that a patch is safe to apply. And don’t forget that applications — not just the operating system — need to be kept updated, too.

6: Getting sloppy about security

Many administrators enforce best security practices for their users but get sloppy when it comes to their own workstations. For example, IT pros who would never allow users to run XP logged on with administrative accounts every day think nothing of running as administrators themselves while doing routine work that doesn’t require that level of privileges. Some administrators seem to think they’re immune to malware and attacks because they “know better.” But this overconfidence can lead to disaster, just as it does for police officers who have a high occurrence of firearms accidents because they’re around guns all the time and become complacent about the dangers.

7: Not documenting changes and fixes

Documentation is one of the most important things that you, as a network admin, can do to make your own job easier and to make it easier for someone else to step in and take care of the network in your absence. Yet it’s also one of the most neglected of all administrative tasks.

You may think you’ll remember what patch you applied or what configuration change you made that fixed an exasperating problem, but a year later, you probably won’t. If you document your actions, you don’t have to waste precious time reinventing the wheel (or the fix) all over again.

Some admins don’t want to document what they do because they think that if they keep it all in their heads, they’ll be indispensable. In truth, no one is ever irreplaceable — and by making it difficult for anyone else to learn your job, you make it less likely that you’ll ever get promoted out of the job.

Besides, what if you got hit by a truck crossing the street? Do you really want the company to come to a standstill because nobody knows the passwords to the administrative accounts or has a clue about how you have things set up and what daily duties you have to perform to keep the network running smoothly?

8: Failing to test backups

One of the things that home users end up regretting the most is forgetting to back up their important data — and thus losing it all when a hard drive fails. Most IT pros understand the importance of backing up and do it on a regular schedule. What some busy admins don’t remember to do regularly is test those backups to make sure that the data really is there and that it can be restored.

Remember that making the backup is only the first step. You need to ensure that those backups will work if and when you need them.

9: Overpromising and underdelivering

When your boss is pressuring you for answers to questions like “When can you have all the desktop systems upgraded to the new version of the software?” or “How much will it cost to get the new database server up and running?”, your natural tendency may be to give a response that makes you look good. But if you make promises you can’t keep and come in late or over budget, you do yourself more damage than good.

A good rule of thumb in any business is to underpromise and overdeliver instead of doing the opposite. If you think it will take two weeks to deploy a new system, give yourself some wiggle room and promise it in three weeks. If you’re pretty sure you’ll be able to buy the hardware you need for $10,000, ask for $12,000 just in case. Your boss will be impressed when you get the project done days ahead of time or spend less money than expected.

10: Being afraid to ask for help

Ego is a funny thing, and many IT administrators have a lot invested in theirs. When it comes to technology, you may be reluctant to admit that you don’t know it all, and thus afraid — or embarrassed — to ask for help. I’ve known MCSEs and MVPs who couldn’t bear to seek help from colleagues because they felt they were supposed to be the “experts” and that their reputations would be hurt if they admitted otherwise. But plunging ahead with a project when you don’t know what you’re doing can get you in hot water, cost the company money, and even cost you your job.

If you’re in over your head, be willing to admit it and seek help from someone more knowledgeable about the subject. You can save days, weeks, or even months of grief by doing so.

PS: This article was written in the hope that my manager (though I have my doubts) or some other manager will see it and use it to teach the new technical people under their responsibility.

Posted in Windows 2008 | Leave a Comment »