Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Posts Tagged ‘Web server’

Best practice for a good Microsoft IIS 7 Security

Posted by Alin D on June 21, 2011

Microsoft’s Internet Information Services (IIS) Web server has presented enterprises with more than its share of security problems over the years, including the infamous Code Red worm nearly a decade ago. A key security concern with IIS has always been the number of features that are automatically installed and enabled by default, such as scripting and virtual directories, many of which proved vulnerable to exploit and led to major security incidents.

With the release of IIS 6 a few years ago, a “lockdown by default” approach was introduced with several features either not being installed or installed but disabled by default. IIS 7, the newest iteration, goes even further. It’s not even installed on Windows Server 2008 by default, and when it is installed, the Web server is configured to serve only static content with anonymous authentication and local administration, resulting in the simplest of Web servers and the smallest attack surface possible to would-be hackers.

This is possible because IIS 7 is completely modularized. Let’s briefly dig into why that is and how it enables a more secure product. Essentially administrators can select from more than 40 separate feature modules to completely customize their installation. By only installing the feature modules required for a particular website, administrators can greatly reduce the potential attack surface and minimize resource utilization.

Be aware, however, that this is true only with a clean install. If you are upgrading your Windows OS and running an earlier version of IIS, all the metabase and IIS state information is gathered and preserved. Consequently, many unnecessary Web server features can be installed during an upgrade. Therefore, it is good practice for an organization to revisit its application dependencies on IIS functionality after an upgrade and uninstall any unneeded IIS modules.

Fewer components also means there are fewer settings to manage and problems to patch as it’s only necessary to maintain the subset of modules that are actually being used. This reduces downtime and improves reliability. Also, the IIS Management Console, with all its confusing tabs, has been replaced with a far more intuitive GUI tool, which makes it easier to visualize and understand how security settings are implemented. For example, if the component supporting basic authentication is not installed on your system, the configuration setting for it doesn’t appear and confuse matters.

So what components are likely to be needed to run a secure IIS? The first six listed below will be required by any website running more than just static pages. Items seven and eight will be necessary for anyone needing to encrypt data between the server and client, while shared configuration is useful when you have a Web farm and want each Web server in the farm to use the same configuration files and encryption keys:

  1. Authentication includes integrated Windows authentication, client certificate authentication and ASP.NET forms-based authentication, which lets you manage client registration and authentication at the application level, instead of relying on Windows accounts. 
  2. URL Authorization, which integrates nicely with ASP.NET Membership and Role Management, grants or denies access to URLs within your application based on user names and roles so you can prevent users who are not members of a specific group from accessing restricted content. 
  3. IPv4 Address and Domain Name Rules provide content access based on IP Address and Domain Name. The new property “allowUnlisted” makes it a lot easier to prevent access to all IP addresses unless they are listed. 
  4. CGI and ISAPI restrictions allow you to enable and disable dynamic content in the form of CGI files (.exe) and ISAPI extensions (.dll). 
  5. Request filters incorporate the functionality of the UrlScan tool restricting the types of HTTP requests that IIS 7 will process by rejecting requests containing suspicious data. Like Apache’s mod_rewrite, it can use regular expressions to block attacks or modify requests based on verb, file extension, size, namespace and sequences. 
  6. Logging now provides real-time state information about application pools, processes, sites, application domains and running requests as well as the ability to track a request throughout the complete request-and-response process. 
  7. Server Certificates 
  8. Secure Sockets Layer 
  9. Shared Configuration

Other features that enhance the overall security of IIS 7 include new built-in user and group accounts dedicated to the Web server, which enable a common security identifier (SID) to be used across machines and so simplify access control list management, as well as application pool sandboxing. Server administrators, meanwhile, have complete control over which settings are configurable by application administrators, while still allowing application administrators to make configuration changes directly in their application without having administrative access to the server.

IIS 7 is quite a different beast compared with previous incarnations, and that’s a good thing. It has been designed and built along classic security principles, and it gives Windows-based organizations a Web server that can be more securely configured and managed than ever before. There may still not be enough here from a security perspective to sway Linux and Apache shops to change to IIS anytime soon, but Microsoft has definitely narrowed the security gap between them. It will take administrators a while to get used to the new modular format and administrative tools and tasks, but the training and testing time will be worth it, as the underlying OS and framework are ones administrators are already familiar with.

 

 

 

Posted in TUTORIALS

Password security in SQL Server part 2

Posted by Alin D on June 20, 2011

In the first part of this series we discussed SQL Server password security, the first step in securing SQL Server 2008 R2.

Encrypting Client Connection Strings

 

While using Windows authentication is the best way to connect to the database server, it isn’t always possible, because the client machine that is connecting to the database server may not be joined to the Windows domain.

This is most often the case when the web server is located in a DMZ network and the database server is located within the internal network, as shown in the figure below.


In a case like this, the application development team should take extra care to secure the web server’s connection string. Without this extra protection, someone could break into the web server, find the database server’s connection information sitting in the web.config file, and simply log into the database using the username and password, which are stored in plain text in the configuration file.

One useful technique is to have the web application, on startup, read the web.config file looking for an unencrypted connection string. The application reads that string into memory, deletes that node from the web.config file’s XML, adds a new node labeled as the encrypted string, encrypts the string, places the encrypted value within the XML document, and saves it. On subsequent loads of the XML file, the unencrypted connection string is not found, so the application loads the encrypted version and decrypts it in memory, making it much, much harder for someone who has broken into the web server to find any useful connection string information.

If you don’t want to give the web application access to write to the web.config file (as this would technically be a security hole unto itself), the application team could create a small standalone app that takes the normal connection string and outputs an encrypted value, which the SA team could then place within the web.config file during deployment of the application.
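The startup pattern described above can be sketched in a few lines. The sketch below is Python rather than the .NET code a real web application would use, and the base64 “cipher” is only a placeholder so the example stays self-contained: production code would use a real encryption API (DPAPI, AES, or ASP.NET’s protected configuration). All element names are illustrative:

```python
import base64
import xml.etree.ElementTree as ET

def encrypt(value: str) -> str:
    # Placeholder only: real code must use a proper cipher, not base64.
    return base64.b64encode(value.encode()).decode()

def decrypt(value: str) -> str:
    return base64.b64decode(value.encode()).decode()

def protect_connection_string(config_path: str) -> str:
    """On startup: if a plain-text connection string is found, swap it
    for an encrypted node and save the file; either way, return the
    decrypted connection string for in-memory use."""
    tree = ET.parse(config_path)
    root = tree.getroot()
    plain = root.find("connectionString")
    if plain is not None:
        secret = plain.text
        root.remove(plain)                     # drop the plain-text node
        enc = ET.SubElement(root, "encryptedConnectionString")
        enc.text = encrypt(secret)             # store only the encrypted form
        tree.write(config_path)
        return secret
    # Subsequent startups: only the encrypted node exists on disk.
    return decrypt(root.find("encryptedConnectionString").text)
```

On the first run the plain-text string is removed from disk; every later run decrypts the stored value in memory only.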

SQL Reporting Services

SQL Reporting Services does an excellent job of protecting the connection information to the repository databases, as well as the connection strings that the reports use to connect to the source databases. All database connection strings used by SQL Reporting Services are encrypted and stored within the web.config file as encrypted strings. Within the SQL Server Reporting Services database, typically named ReportServer, all the connection information that the reports use to connect to the source databases is also stored as an encrypted value. Together, these two encrypted values form a very secure platform that makes it very difficult for an attacker to exploit the SQL Server Reporting Services platform to get any useful information from the database holding the Reporting Server catalog, or from the source data; getting access to the source data via the data stored within the SQL Server Reporting Services repository would require decrypting two layers of information.

 

Application Roles

When using Windows authentication, there is an unfortunate side effect that needs to be considered: the user can now log into the database using any Open Database Connectivity (ODBC) based application, such as Microsoft Access, Microsoft Excel, or SQL Server Management Studio, and they have the same rights they would have if they were logged in via the application. If the user logs into the database by supplying the SQL login username and password, this same risk exists. However, if the application contains the username and password hard-coded within it, the user won’t have this ability, as they will not know the username and password. Users connecting directly with the application’s rights is probably something you don’t want to happen. But before you go and switch all your applications to SQL authentication with the password hard-coded in the application, there is another solution that gives you the best of both worlds: the application role.

The application role is not a very well-understood, and therefore not very frequently used, security feature of Microsoft SQL Server. It allows a user to authenticate against the Microsoft SQL Server instance, but not have any specific rights within the database. The rights to perform actions are granted to the application role, which must then be activated by the application before the user is able to perform any actions.

Application roles are created by using the sp_addapprole system stored procedure in SQL Server 2000 and below, or by using the CREATE APPLICATION ROLE statement in SQL Server 2005 and above. The application role has its own password, which is used to ensure that only authorized applications are able to activate the role. The application role is activated by using the sp_setapprole system stored procedure, and then deactivated by using the sp_unsetapprole system stored procedure, or by simply closing the connection to the database engine.

Here is sample code using the sp_addapprole system stored procedure and CREATE APPLICATION ROLE statement to create an application role:

EXEC sp_addapprole @rolename='MyAppRole', @password='MyPa$$word'

CREATE APPLICATION ROLE MyAppRole WITH PASSWORD='MyPa$$word'

The sp_setapprole system stored procedure has four parameters that are of interest. The first and second are the @rolename and @password parameters, to which you supply the name and password specified when you created the application role. The third is the @fCreateCookie parameter, a bit parameter that tells the SQL Server whether it should create a cookie when the application role is activated (cookies are explained in a moment). The fourth is the @cookie parameter, a varbinary(8000) that stores the cookie created if the @fCreateCookie parameter was set to 1.

The @cookie parameter stores a cookie much in the same way that your web browser stores cookies when you browse the web, so that the SQL Server can correctly identify the session that was used to activate the application role. Thus, when the application role is disabled, the SQL Server knows which session state to return the user’s session to. If you don’t plan to unset the application role and will simply close the connection to the SQL Server, then you don’t need to set a cookie and can simply set the @fCreateCookie parameter to 0, telling the SQL Server not to create the cookie.

In the sample code shown in the next example, we create a new database, and then an application role within that database. We then create a table within the database, as well as a user within the database. We next give the application role access to select data from the table. We then use the EXECUTE AS statement to change our execution context to that of the user we just created, which has no rights. Next we query the table, which returns an error message. After that we switch to using the application role and try to query the table again, this time receiving the output as a recordset. We then unset the application role using the cookie that was created by the sp_setapprole system stored procedure.

We then use the REVERT statement so that we are no longer executing code as our MyUser database user, after which we drop the sample database.

USE master
GO

IF EXISTS (SELECT * FROM sys.databases WHERE name = 'AppRoleTest')
DROP DATABASE AppRoleTest
GO

CREATE DATABASE AppRoleTest
GO

USE AppRoleTest
GO

CREATE APPLICATION ROLE MyAppRole WITH PASSWORD='MyPa$$word'
GO

CREATE TABLE MyTable
(Col1 INT)
GO

CREATE USER MyUser WITHOUT LOGIN
GO

GRANT SELECT ON MyTable TO MyAppRole
GO

DECLARE @cookie varbinary(8000)
EXECUTE AS USER = 'MyUser'
SELECT * FROM MyTable
EXEC sp_setapprole @rolename='MyAppRole', @password='MyPa$$word', @cookie=@cookie OUTPUT, @fCreateCookie=1
SELECT * FROM MyTable
EXEC sp_unsetapprole @cookie=@cookie
REVERT
GO

USE master
GO

DROP DATABASE AppRoleTest
GO

 

When we run this script in text output mode from within SQL Server Management Studio, we see the output shown in the next image. The first SELECT statement we issued was rejected because the user didn’t have rights to the table dbo.MyTable in the AppRoleTest database. However, the second SELECT statement, issued after we set the application role, was accepted by the database, and the contents of the table were returned.

You can now see how use of the application role enables the use of the very secure Windows authentication without requiring that the user’s Windows account actually have rights to access any objects within the database directly; the application can access those objects once the application role has been activated.

 

Another technique that can be used along the same lines as the application role is to create a user with no attached login and use the EXECUTE AS statement to run commands as that user. While this will allow you to run all your statements without the user needing rights to the database objects, the problem with this technique is that any logging done via the username functions returns the dummy user that you created and not the login of the actual user. This is shown, along with sample code, in the next image. As you can see in the sample code, we create a dummy user, then output our username using the SUSER_SNAME() system function, then switch to running under the context of the MyUser database user, and then output the value of the SUSER_SNAME() function again, with the output being the SID of the MyUser database user account. You can’t even query the dynamic management views to get the correct username of the user logged in, because once the EXECUTE AS has been executed, the dynamic management views show the SID of the user instead of the name of the login that was originally connected to the database.

When using an application role, you don’t have this username-reporting problem with either the system functions or the dynamic management views.

 

How to use Windows Domain Policies to enforce password length

 

Starting with Microsoft SQL Server 2005, Microsoft introduced a new level of password security within the product, as this was the first version of Microsoft SQL Server that could use the domain policies to ensure that the passwords for SQL Authentication accounts were long enough and strong enough to meet the corporate standards set forth by the SAs. By default, all SQL Authentication accounts created within the SQL Server instance must meet the domain password security policies. You can, if necessary, remove these restrictions by editing the SQL Authentication account. Within Microsoft SQL Server, two settings can be applied to each SQL Authentication login, which are shown in the next screenshot.

  1. The “Enforce password policy” setting tells the SQL Server engine to ensure that the password meets the complexity requirements of the domain, and that the password hasn’t been reused, per the password history policy defined within the domain policy, which is explained later in this article.


  2. The “Enforce password expiration” setting tells the SQL Server that the password for the SQL Authentication login should expire based on the domain settings. The “User must change password at next login” option, shown disabled in the image above, only becomes available when the login’s password is manually reset and the “Enforce password policy” setting is enabled for the login.

Allowing the SQL Server to ensure that your passwords meet your domain policies has some distinct advantages, especially when it comes to auditing. Without this ability, you would need to physically check each SQL Server password when the auditor asks you whether all your SQL Authentication passwords meet the corporate standards. In a worst-case situation, this would require that you either keep a list of all the usernames and passwords somewhere (which would probably cause you to fail the audit) or contact each person who uses a SQL Authentication login and ask how long the password is, whether it meets the company policies, and so on. Now, with this feature built into the product, a quick and simple SQL query is all that it takes to verify the information.

Querying the sys.sql_logins catalog view will show you any logins that may not meet the domain password policies:

SELECT name, is_policy_checked FROM sys.sql_logins
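For auditing, it is often more useful to surface only the problem logins. A sketch of a filtered variation on the query above — sys.sql_logins also exposes an is_expiration_checked column for the “Enforce password expiration” setting:

```sql
-- Return only the SQL logins that bypass the domain password policies.
SELECT name, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0
   OR is_expiration_checked = 0;
```

An empty result set means every SQL Authentication login on the instance is subject to the domain policies.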

 

While the T/SQL shown in the previous example works great for a single SQL Server, if there are dozens or hundreds of SQL Servers that need to be verified, a T/SQL script may not be the best way to check all those servers. In this case a Windows PowerShell script may be more effective. Within the Windows PowerShell script shown in the next example, SMO (Server Management Objects) is used to get a list of all the available instances on the network.

After this list has been returned from the network, SMO is used to return the SQL Logins along with the value of the PasswordPolicyEnforced setting.

Example: “Using SMO to return the PasswordPolicyEnforced setting for all SQL Logins for all SQL Server Instances available on the network.”

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | out-null

foreach ($InstanceList in [Microsoft.SqlServer.Management.Smo.SmoApplication]::EnumAvailableSqlServers())
{
$InstanceList;
$instanceName = $InstanceList.Name;
$instanceName;
$SMOserver = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $instanceName
$db = $SMOserver.Logins | where-object {$_.LoginType -eq "sqllogin"} | select name, PasswordPolicyEnforced
$db;
}

 

By setting the is_policy_checked flag to true (shown as the number 1 when you run the sample query), this tells you that any password assigned to the SQL Authentication login must meet the password requirements of the domain. Expanding on the query shown above, a SQL Server Reporting Services report could be configured that runs against each SQL Server in the environment, giving a simple report that can be run as needed for auditing purposes.

When you have the is_policy_checked flag set to true, there are several domain-wide settings that will be evaluated each time the password is changed. These policies can be found by editing the Group Policy Object (GPO) on the domain that holds these settings, or by editing the local security policies for the server in question if that server is not a member of a Windows domain. While you can set these settings on a server that is a member of the domain, doing so won’t have any effect, as the domain policies will overwrite any local settings you have set.

If all the SQL Server instances that need to be polled are registered within SQL Server Management Studio, this select statement can be run against all the instances at once, returning a single recordset with all the needed information. This can be done by opening the Registered Servers panel within SQL Server Management Studio by clicking on the View dropdown menu and then the “Registered Servers” menu item. Right-click on the folder that contains the SQL Server instances you want to execute the query against and select “New Query” from the context menu that opens. This opens a new query window which, when executed, will run the query against all the servers within the registered servers folder, with all the data from all the servers being returned as a single recordset.

SQL Server Management Studio will automatically add a new column at the beginning of the recordset containing the name of the instance; this allows you to use the same query shown earlier against all the SQL Servers at once, getting back a single recordset that can be reviewed or handed off as needed.

Windows Authentication Group Policies

There are a total of six policies that you can set within Windows that affect the domain or local password policy. However, Microsoft SQL Server only cares about five of them. The policy with which the SQL Server is not concerned is the “Store passwords using reversible encryption” policy. This policy tells Windows whether it should store the user’s password using a two-way encryption process, instead of a one-way hash. Enabling this policy presents a security vulnerability on your domain, as an attacker could download the list of all users and passwords, then break the encryption on the passwords and have full access to every user’s username and password. Due to the security issues with this setting, it is disabled by default and should remain so unless there is a specific reason to enable it. The typical reasons to enable it include using Challenge Handshake Authentication Protocol (CHAP) through Remote Access or Internet Authentication Services (IAS). It is also required if one or more Internet Information Services (IIS) servers within the Windows domain are using Digest Authentication.

The five password policies that the SQL Server does recognize and follow are the following:

  1. Enforce password history;
  2. Maximum password age;
  3. Minimum password age;
  4. Minimum password length;
  5. Password must meet complexity requirements.

Each of these settings has a specific effect on what the passwords can be set to and should be fully understood before changing the password of an SQL Authentication Login. The “Enforce password history” setting on the domain (or local computer) is not a boolean, although the name sounds as though it would be. It is in fact the number of old passwords for the account that the SQL Server should track so that passwords cannot be reused. The setting has a valid range of 0 (or no passwords) to 24 passwords. The more passwords that are kept, the greater the chance that the user will forget their password, but the lesser the chance that someone will break into the system via an old password. The default on the domain is 24 passwords.

The “Maximum password age” setting tells the SQL Server how many days a password is valid. After this number of days has passed since the last password change, the user will be prompted to change the password. If the password is not changed, the user will not be able to log into the database instance. This setting accepts a value from 0 (never expires) to 999 days, with a default value of 42 days.

The “Minimum password age” setting tells the SQL Server how many days must pass from the time a password has been changed until it can be changed again. This setting prevents the user from rapid-fire changing their password to use up the number of passwords specified by the “Enforce password history” setting. Without this setting, or with this setting set to 0, when the user’s password expires, the user could simply change the password 24 times and then change it back to the same password as before, effectively breaking the password history feature. This setting accepts a value from 0 (allows immediate password changes) to 998 days, with a default value of 1; however, it has a practical upper limit of one day less than the “Maximum password age” setting. If you were to set it to the same value as, or higher than, the “Maximum password age” setting, users wouldn’t ever be able to log in until after their passwords had expired.

The “Minimum password length” setting tells the SQL Server how many characters need to be in the password for it to be acceptable. This setting can be any value from 0 (allowing a blank password) to 14 characters, with a default value of 7 characters. It is typically recommended to increase this value from the default of 7 to a higher number, such as 9 characters. While this will make the password harder for the user to remember, it will also make it exponentially harder for an attacker to guess.

The “Password must meet complexity requirements” setting tells the SQL Server that all passwords must be considered “strong” passwords. There are several requirements to having a strong password beyond what one would normally consider. By default this setting is enabled.

  1. The password cannot contain the username within it.
  2. The password must be at least six characters in length.
  3. The password must contain characters from at least three of these four categories:
     a. Lower-case letters (a through z);
     b. Upper-case letters (A through Z);
     c. Numbers (0 through 9);
     d. Symbols ($, #, @, %, ^ for example).

When you enable the “Enforce password policy” setting for an SQL Authentication login, this enforces the “Enforce password history,” “Minimum password length,” and “Password must meet complexity requirements” settings against that login. When you enable the “Enforce password expiration” setting for an SQL Authentication login, this enforces the “Maximum password age” and the “Minimum password age” settings against that login. In order to enable the “Enforce password expiration” setting against an SQL Authentication login, you must also enable the “Enforce password policy” setting. However, you do not need to enable the “Enforce password expiration” setting if you enable the “Enforce password policy” setting.
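These two checkboxes map to the CHECK_POLICY and CHECK_EXPIRATION options of the CREATE LOGIN and ALTER LOGIN statements. A brief sketch, with an illustrative login name and password:

```sql
-- Create a SQL Authentication login that follows the domain policies.
CREATE LOGIN MyAppLogin
WITH PASSWORD = 'MyPa$$word123',
     CHECK_POLICY = ON,       -- "Enforce password policy"
     CHECK_EXPIRATION = ON;   -- "Enforce password expiration"

-- Expiration can be turned off on its own, but CHECK_EXPIRATION = ON
-- requires CHECK_POLICY = ON.
ALTER LOGIN MyAppLogin WITH CHECK_EXPIRATION = OFF;
```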

When working with an SQL Azure database, the login must meet the password complexity settings that Microsoft has defined. As of the summer of 2010, this means that the password must be 8 characters in length and meet the complexity requirements shown above. There is no way to configure a login on an SQL Azure instance not to meet these requirements, as SQL Azure instances do not support using the check_policy parameter to disable the policy checking.

Summary

One of the biggest problems in today’s IT world is that once you have created your nice, secure passwords, how do you track them? Those usernames and passwords are probably going to be documented somewhere, typically within an Excel sheet kept on a network share so that all the database administrators within the group have quick and easy access to them. However, by doing this you have now taken all the passwords that you spent time making strong and secure, and that are protected within your web.config and app.config files, and made them easily readable and usable by anyone who has access to the network share. Typically, it is not just the database administrators who have access to that share: the SAs, the backup software, and the monitoring system would all have access as well, in addition to whoever finds a lost backup tape for your file server. In other words, be sure to store that password list in a nice, safe place, and not on a network share available for everyone to read.

 

Posted in SQL

How transparent data encryption works in SQL Server 2008

Posted by Alin D on June 19, 2011

Transparent Data Encryption (TDE) is a new feature in SQL Server 2008 designed to encrypt your database files, database backups and temporary database (tempdb). As you request data from your database, it will be decrypted in real time, and TDE will not prevent any user authorized to enter your database from accessing and reading your tabular data.
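Enabling TDE is a short sequence of T-SQL steps: create a master key and a certificate in the master database, create the database encryption key, then turn encryption on. A sketch follows; the database name, certificate name and passwords are illustrative, not from any particular deployment:

```sql
USE master;
GO
-- 1. Create a database master key in master (if one does not already exist).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'U$eAStr0ngPassword!';
GO
-- 2. Create a certificate that will protect the database encryption key.
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
-- 3. Create the database encryption key (DEK) inside the user database.
USE MyDatabase;
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
-- 4. Turn transparent data encryption on for the database.
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
GO
```

Once the background encryption scan completes, data is encrypted on disk and decrypted transparently as it is read.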

Transparent data encryption – why use it?

The PCI DSS (Payment Card Industry Data Security Standard) requires that each of your databases and backups are secured, and TDE is intended primarily to help companies running SQL Server 2008 meet the terms of those compliance guidelines.

Keep in mind that TDE won’t satisfy all of your security or compliance requirements on its own. It is instead part of a suite of features provided by SQL Server 2008 to help DBAs achieve compliance. The DBA will still need to ensure that sensitive data is encrypted by the encryption algorithms, and network and system administrators must ensure that the Windows servers, network and link between the Web and application servers are secure. Developers are still responsible for making sure that communication from the client to the Web server is secure or encrypted.

Considerations before using TDE

Before you implement transparent data encryption on your SQL Server you should consider several factors.

For example, any company using TDE in SQL Server may notice a slight performance degradation as data is encrypted while being written to disk, and decrypted when being read from the disk. This hit is mainly due to increased CPU requirements. The data file, transaction log and backups will be the same size as with a database that does not have TDE enabled.

Database compression ratios for encrypted database backups are much less when compared to those of unencrypted backups. This may require increased storage requirements for your backups, and added costs if you are transferring those encrypted backup files offsite.

While securing backups can also be done natively in SQL Server via a password, this is considered a weak option. Most tape backup solutions now include encryption on the fly while writing to tape devices. While in the past this technology was slow, there have been considerable advances in tape encryption over the past few years. Still, these developments will not prevent a hacker from accessing your SQL Server, nor will they hinder their efforts to detach your database files, copy them to another SQL Server, attach them and read your database contents. Database file encryption is required by most compliance regulations.

Below are some other important factors to take into account before implementing transparent data encryption:

  • Using TDE requires a database encryption key (DEK) protected by a certificate. You will need both the certificate and its private key when restoring your backups.
  • If you are using TDE, instant file initialization is disabled. Instant file initialization is a Windows feature (available since Windows Server 2003) that SQL Server 2005 and later can use to make database file growth extremely fast, because the underlying space in the file system does not need to be zeroed out.
  • If you are log shipping or database mirroring a transparent data encryption database, TDE will need to be enabled on the secondary, or mirror, server.
  • FILESTREAM data will not be encrypted. FILESTREAM is a feature of SQL Server 2008 where varbinary columns can be stored in the file system and asynchronously streamed to the client.
  • Read-only file groups in your database will have to be made writable to enable TDE to encrypt the database contents. They can then be made read-only again.
  • Enabling a database for transparent data encryption may take some time, and some database operations will not be enabled during this conversion period. Consult Microsoft’s page on understanding TDE for more information on what these limitations are.
  • Replication is “TDE unaware”, and replicated data will not be encrypted. In other words, replication network traffic will be plain text as always, as will the replication snapshot files. The DBA will need to account for this in the compliance effort.
  • Full-text indexing will extract textual data from varbinary and image columns into the file system momentarily during the index process. This data will be plain text and not encrypted. Microsoft recommends that you do not full-text index data stored in the varbinary/image columns.

Enabling transparent data encryption in SQL Server 2008

To enable TDE you will first need to create a master key in the master database. To do this, run the following statement:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyPassword';

You will then need to protect the DEK with a certificate, which you will be able to transfer to another server should you need to restore the TDE-protected database there. You can achieve this by using the following statement:

CREATE CERTIFICATE MyCertificate WITH SUBJECT = 'My Certificate'

You will then need to back up the certificate to the file system, along with its private key. Ensure that you keep both of these files in a secure, known location; if you lose them you will be unable to restore your database and read its contents.

BACKUP CERTIFICATE MyCertificate TO FILE = 'c:\temp\MyCertificateBackup.bck'
WITH PRIVATE KEY (
FILE = 'c:\Temp\MyPrivateKey.key',
ENCRYPTION BY PASSWORD = 'MyPassword');
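To restore a TDE-protected backup on a different server, the certificate must first be recreated there from these backup files. The following is a sketch, assuming the file names and passwords used above (the backup file name and the recovery server's master key password are hypothetical and will differ in your environment):

USE master;
-- A master key must exist in the recovery server's master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'ADifferentStrongPassword';

-- Recreate the certificate from the backed-up files
CREATE CERTIFICATE MyCertificate
FROM FILE = 'c:\temp\MyCertificateBackup.bck'
WITH PRIVATE KEY (
FILE = 'c:\Temp\MyPrivateKey.key',
DECRYPTION BY PASSWORD = 'MyPassword');

-- The TDE-protected database backup can now be restored as usual
RESTORE DATABASE myDatabase FROM DISK = 'c:\temp\myDatabase.bak';

Without the certificate in place, the RESTORE statement fails because the server cannot find the certificate that the database encryption key was encrypted with.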

You will now need to create a database encryption key, encrypted with the above certificate, inside the database you want to protect.

USE myDatabase;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyCertificate;

Now you can enable transparent data encryption on your database by using the following command:

ALTER DATABASE myDatabase SET ENCRYPTION ON
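Should you later need to remove encryption, the process runs in reverse — a sketch (the background decryption scan must finish before the key can be dropped):

-- Turn encryption off; SQL Server starts a background decryption scan
ALTER DATABASE myDatabase SET ENCRYPTION OFF;

-- Once sys.dm_database_encryption_keys reports the scan as complete,
-- the database encryption key can be removed
USE myDatabase;
DROP DATABASE ENCRYPTION KEY;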

Finally, you can monitor the progress or state of the encryption conversion by querying the following DMV:

SELECT db_name(database_id), encryption_state FROM sys.dm_database_encryption_keys
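The encryption_state column is a numeric code. A friendlier version of this query, using the state values documented for SQL Server 2008, might look like this:

SELECT db_name(database_id) AS database_name,
       CASE encryption_state
            WHEN 0 THEN 'No database encryption key present'
            WHEN 1 THEN 'Unencrypted'
            WHEN 2 THEN 'Encryption in progress'
            WHEN 3 THEN 'Encrypted'
            WHEN 4 THEN 'Key change in progress'
            WHEN 5 THEN 'Decryption in progress'
       END AS state_description,
       percent_complete
FROM sys.dm_database_encryption_keys;

A state of 3 with percent_complete at 0 indicates that the encryption conversion has finished.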

The important thing to remember about transparent data encryption for SQL Server is that it’s not a one-stop encryption solution: it encrypts the data files and backups, not individual items of sensitive data within the database. You will still need to protect sensitive data by encrypting individual columns so that only authorized people can view them.
Posted in SQL

Failover clustering, network load balancing drive high availability

Posted by Alin D on January 25, 2011

Most of your customers know business productivity and revenues can be drastically affected if a mission-critical server, application or service fails. Indeed, one of the main objectives for IT departments everywhere is providing high availability for mission-critical resources. Toward that goal, service providers can implement high-availability alternatives in Windows Server 2008 to mitigate server outages for their Windows shop customers.

The first step in designing a Windows-based high-availability solution entails understanding the two main high-availability alternatives available with Windows Server 2008: failover clustering and network load balancing. These options tackle high availability in different ways.

Failover clustering

At the macro level, a Windows Server 2008 failover cluster provides high availability by eliminating the threat of a single point of failure for a server, application or service. Normally, if a server with a particular application or service crashes, the application or service is unavailable until an administrator manually rectifies the problem. But if a clustered server crashes, another server within the cluster will automatically take over the failed server’s application and service responsibilities without intervention from an administrator or impact on operations.

Windows Server 2008 supports the shared-nothing cluster model, in which two or more independent servers, or nodes, each own and manage their own local resources and provide nonsharing services; no resource is owned by more than one node at a time. In case of a node failure, the disks, resources and services running on the failed node fail over to a surviving node in the cluster. For example, if an Exchange server is operating on node 1 of the cluster and it crashes, the Exchange application and services will automatically fail over to node 2 of the cluster. This model minimizes server outage and downtime: only one node manages a particular set of disks, cluster resources and services at any given time.

When designing and implementing failover clusters, service providers need to ensure the following preconditions: that each server’s hardware specifications are identical, that a shared storage server such as a SAN or NAS is in place, and that a dedicated network for heartbeat communication between server nodes is available. In addition, all hardware and software drivers associated with the cluster must be certified by Microsoft, and the customer must use either the Enterprise or Data Center Edition of Windows Server 2008. Those editions support as many as 16 nodes in a single failover cluster implementation.

Network load balancing
Network load balancing (NLB), Windows Server 2008’s other high-availability alternative, enables an organization to scale server and application performance by distributing TCP/IP requests to multiple servers, also known as hosts, within a server farm. This scenario optimizes resource utilization, decreases computing time and ensures server availability. Typically, service providers should consider network load balancing if their customer situation includes, but is not limited to, Web server farms, Terminal Services farms, media servers or Exchange Outlook Web Access servers.

Above and beyond providing scalability by distributing TCP/IP traffic among servers participating in a farm, NLB also ensures high availability by identifying host failures and automatically redistributing traffic to the surviving hosts.

Network load balancing is native to all editions of Windows Server 2008. Unlike failover clustering, NLB does not require any special hardware, and a network load balancing server farm can include as many as 32 nodes. When designing and implementing NLB server farms, it’s common to start off with two servers for scalability and high availability and then add additional nodes to the farm as TCP/IP traffic increases.

Clearly, failover clustering and network load balancing with Windows Server 2008 provide service providers with options when designing and implementing high availability for their customers’ mission-critical servers and applications. Through the use of failover clustering and network load balancing, customers will gain an increase in server availability to mission-critical servers, a decrease in downtime during routine maintenance, a decrease in server outages, and a minimization of end-user outages during a failover.

Posted in Windows 2008

Active Directory Rights Management Services (AD RMS)

Posted by Alin D on January 19, 2011

Active Directory Rights Management Services (AD RMS) is an information protection technology that works with AD RMS-enabled applications to help safeguard digital information from unauthorized use. Content owners can define who can open, modify, print, forward, or take other actions with the information.

Introduction

Your organization’s overall security strategy must incorporate methods for maintaining the security, protection, and validity of company data and information. This includes not only controlling access to the data, but also controlling how the data is used and distributed to both internal and external users. Your strategy may also include methods to ensure that the data is tamper-resistant and that only the most current information is valid, based on the expiration of outdated or time-sensitive information.
AD RMS enhances your organization’s existing security strategy by applying persistent usage policies to digital information. A usage policy specifies trusted entities, such as individuals, groups of users, computers, or applications. These entities are only permitted to use the
information as specified by the rights and conditions configured within the policy. Rights can include permissions to perform tasks such as read, copy/paste, print, save, forward, and edit. Rights may also be accompanied by conditions, such as when the usage policy expires for a
specific entity. Usage policies remain with the protected data at all times to protect information stored within your organization’s intranet, as well as information sent externally via e-mail or transported on a mobile device.

AD RMS Features

An AD RMS solution is typically deployed throughout the organization with the goal of protecting sensitive information from being distributed to unauthorized users. The addition of AD RMS–enabled client applications such as the 2007 Office system or AD RMS–compatible server roles such as Exchange Server 2007 and Microsoft Office SharePoint Server 2007 provides an overall solution for the following uses:

Enforcing document rights

Every organization has documents that can be considered sensitive information. Using AD RMS, you can control who is able to view these sensitive files and prevent readers from accessing selected application functions, such as printing, saving, copying, and pasting. If a group of employees is collaborating on a document and frequently updating it, you can configure and apply a policy that includes an expiration date of document rights for each published draft. This helps to ensure that all
involved parties are using only the latest information—the older versions will not open after they expire.

Protecting e-mail communication

Microsoft Office Outlook 2007 can use AD RMS to prevent an e-mail message from being accidentally or intentionally mishandled. When a
user applies an AD RMS rights policy template to an e-mail message, numerous tasks can be disabled, such as forwarding the message, copying and pasting content, printing, and exporting the message.

Depending on your security requirements, you may have already implemented a number of technologies to secure digital content. Technologies such as Access Control Lists (ACLs), Secure Multipurpose Internet Mail Extensions (S/MIME), or the Encrypted File System (EFS) can all be used to help secure e-mail and company documents. However, AD RMS still provides additional benefits and features in protecting the confidentiality and use of the data stored within the documents.

Active Directory Rights Management Services Components

The implementation of an AD RMS solution consists of several components, some of which are optional. The size of your organization, scalability requirements, and data sharing requirements all affect the complexity of your specific configuration.

Figure 1

AD RMS Root Cluster

The AD RMS root cluster is the primary component of an RMS deployment and manages all certification and licensing requests for clients. There can be only one root cluster in each Active Directory forest; the cluster contains at least one Windows Server 2008 server running the AD RMS server role, and you can add multiple servers to the cluster for redundancy and load balancing. During initial installation, the AD RMS root cluster performs an automatic enrollment that creates and signs a server licensor certificate (SLC). The SLC grants the AD RMS server the ability to issue certificates and licenses to AD RMS clients. In previous versions of RMS, the SLC had to be signed by the Microsoft Enrollment Service over the Internet, which required Internet connectivity from either the RMS server or from another computer used for offline enrollment. Windows Server 2008 AD RMS removes the requirement to contact the Microsoft Enrollment Service: it includes a server self-enrollment certificate that is used to sign the SLC locally, so an Internet connection is no longer needed to complete the RMS cluster enrollment process.

Web Services

Each server that is installed with the AD RMS server role also requires a number of Web-related server roles and features. The Web Server (IIS) server role is required to provide most of the AD RMS application services, such as licensing and certification. These IIS-based services are called application pipelines. The Windows Process Activation Service and Message Queuing features are also required for AD RMS functionality. The Windows Process Activation Service is used to provide access to IIS features from any application that hosts Windows Communication Foundation services. Message Queuing provides guaranteed message delivery between the AD RMS server and the SQL Server database. All transactions are first written to the message queue and then transferred to the database. If connectivity to the database is lost, the transaction information will be queued until connectivity resumes.
During the installation of the AD RMS server role, you specify the Web site on which the AD RMS virtual directory will be set up. You also provide the address used to enable clients to communicate with the cluster over the internal network. You can specify an unencrypted URL, or you can use an SSL certificate to provide SSL-encrypted connections to the cluster.

Licensing-only Clusters

A licensing-only cluster is optional and is not part of the root cluster; however, it relies on the root cluster for certification and other services (it cannot provide account certification services on its own). The licensing-only cluster is used to provide both publishing licenses and use licenses to users. A licensing-only cluster can contain a single server, or you can add multiple servers to provide redundancy and load balancing. Licensing-only clusters are typically deployed to address specific licensing requirements, such as supporting unique rights management
requirements of a department or supporting rights management for external business partners as part of an extranet scenario.

Database Services

AD RMS requires a database to store configuration information, such as configuration settings, templates, user keys, and server keys. Logging information is also stored within the database. SQL Server is also used to keep a cache of expanded group memberships obtained from Active Directory to determine if a specific user is a member of a group. For production environments, it is recommended that you use a database server such as SQL Server 2005 or later. For test environments, you can use an internal database that is provided with Windows Server 2008; however, the internal database only supports a single-server root cluster.

How AD RMS Works

Server and client components of an AD RMS solution use various types of eXtensible rights Markup Language (XrML)–based certificates and licenses to ensure trusted connections and protected content. XrML is an industry standard that is used to provide rights that are linked to the use and protection of digital information. Rights are expressed in an XrML license attached to the information that is to be protected. The XrML license defines how the information owner wants that information to be used, protected, and distributed.

AD RMS Deployment Scenarios

To meet specific organizational requirements, AD RMS can be deployed in a number of different scenarios. Each of these scenarios offers unique considerations to ensure a secure and effective rights-management solution. These are some possible deployment scenarios:

■ Providing AD RMS for the corporate intranet
■ Providing AD RMS to users over the Internet
■ Integrating AD RMS with Active Directory Federation Services

Deploying AD RMS within the Corporate Intranet

A typical AD RMS installation takes place in a single Active Directory Forest. However, there may be other specific situations that require additional consideration. For example, you may need to provide rights-management services to users throughout a large enterprise with multiple branch offices. For scalability and performance reasons, you might choose to implement licensing-only clusters within these branch offices. You may also have to deploy an AD RMS solution for an organization that has multiple Active Directory forests. Since each
forest can only contain a single root cluster, you will have to determine appropriate trust policies and AD RMS configuration between both forests. This will effectively allow users from both forests to publish and consume rights-management content.

Deploying AD RMS to Users over the Internet

Most organizations have to support a mobile computing workforce, which consists of users that connect to organizational resources from remote locations over the Internet. To ensure that mobile users can perform rights-management tasks, you have to determine how to
provide external access to the AD RMS infrastructure. One method is to place a licensing-only server within your organization’s perimeter network. This will allow external users to obtain use and publishing licenses for protecting or viewing information. Another common solution
is to use a reverse proxy server such as Microsoft Internet Security and Acceleration (ISA) Server 2006 to publish the extranet AD RMS cluster URL. The ISA server will then handle all requests from the Internet to the AD RMS cluster and passes on the requests when necessary. This is a more secure and effective method, so it is typically recommended over
placing licensing servers within the perimeter network location.

Deploying AD RMS with Active Directory Federation Services

Windows Server 2008 includes the Active Directory Federation Services (AD FS) server role, which is used to provide trusted inter-organizational access and collaboration between two organizations. AD RMS can take advantage of the federated trust relationship as a basis for users from both organizations to obtain rights account certificates (RACs), use licenses, and publishing licenses. In order to install AD RMS support for AD FS, you will need to have already deployed an AD FS solution within your environment. This scenario is recommended if one organization has AD RMS and the other does not; if both have AD RMS, trust policies are typically recommended instead.

Posted in Windows 2008

Best Practices to Speed Up Your Site in ASP.NET

Posted by Alin D on January 12, 2011

A sluggish website not only reduces your site’s ability to attract and retain visitors, it also looks unprofessional and will not be attractive to advertisers. If your site is hosted in the cloud, slowness is also an indication that the site is not properly optimized and is consuming too many system resources, which increases your hosting bill.

These 10 best practices will ensure that your site is running at full speed.

1. Caching

If someone put a gun to my head and gave me one way to improve a site’s performance, it would be caching. Caching buffers your pages in the server’s memory and so avoids database/server round trips, resulting in faster response times and reduced server loads (and thus reduced hosting charges). If your site gets fewer than 1,000 pageviews per day you probably won’t see much speed improvement from caching. Your CMS should support caching (if not, you should move on ASAP) and you should turn it on. For WordPress, WP Super Cache is the standard and still the best caching plugin.
Still not convinced? Take a look at the response times for a site where I was turning caching on and off – all the spikes in the graph are from when caching was turned off.

2. Use a Content Delivery Network (CDN)

CDNs were previously very expensive to use, with high monthly minimums, but several new entrants (notably AWS CloudFront and Rackspace Cloud Files) have made them affordable for smaller sites. CDNs work by caching files at different geographical (‘edge’) locations, reducing the lag experienced by users who are geographically far from the server. The larger the files, the greater the benefit of using a CDN: video files are a must, but you should also consider hosting images and even CSS/JS files on a CDN.

3. Place Scripts at The Bottom of the Page

Html pages load sequentially, so a reference to an external javascript file placed above the body tag will need to be loaded before the page’s content. This can make page loading appear sluggish.
Even scripts such as javascript files for AJAX operations used on the page can be placed below the content. This will mean users cannot interact with the site content when it is first displayed, but research by Facebook showed that users prefer to see content as soon as possible even if it cannot be interacted with.
Scripts such as Google Analytics code should always be placed at the bottom of pages.

4. Use External CSS Files

Pages which are heavy with Html load a lot slower than pages which reference CSS for styles, positioning etc. For starters CSS is more compact than Html for positioning and styling page elements, furthermore if it is placed in an external file it will be cached on the user’s browser so that it will not need to be loaded for subsequent page loads in the same session.
You should always review the output source code (i.e. view the rendered page, then look at its source by right-clicking and selecting View Source or a similar command) to look for Html which can be replaced by CSS.

5. Host Files on a Separate Domain

A quirk of most browsers is that they can only make two simultaneous requests to a domain. If your page has several images and external files (such as CSS or javascript files), these must be queued and requested two at a time. Hosting images or files on a separate domain (or a subdomain) allows the browser to make more simultaneous requests and render the page quicker. This is an additional reason to use a CDN, as the files will always be on a separate domain.

6. Minimize Hits to the Database

Why is caching so effective? It reduces requests both to the server for processing and to the database for data. Database operations are very expensive in terms of resources, so you should review your code to minimize hits to the database. Consider the following:

  • Ensure that you minimize the number of connections opened to a database in a visit. Once a database connection is opened, try to perform as many database operations as possible and then close the connection. Do not open and close connections several times unless this is necessary.
  • Open database connections as late as possible and close them as early as possible – ie don’t do additional processing that isnt necessary whilst the connection is open, grab the data, close the connection and then do additional processing.
  • Review your SQL code to ensure it is efficient – several SQL operations can often be performed in one batch, and it is not always necessary to execute separate insert and select statements. Ensure you don’t use Select * in your queries – always select just the columns you need.
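As a small illustration of the last point (table and column names here are hypothetical):

-- Avoid: two separate round trips, each returning every column
-- SELECT * FROM Customers WHERE CustomerID = 42;
-- SELECT * FROM Orders    WHERE CustomerID = 42;

-- Prefer: one statement returning only the columns the page needs
SELECT c.CustomerName, o.OrderID, o.OrderDate, o.TotalAmount
FROM Customers AS c
JOIN Orders AS o ON o.CustomerID = c.CustomerID
WHERE c.CustomerID = 42;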

7. Optimize Images

Maybe it is just my perception, but I always remember this being first on the list of site optimization checklists, and now it hardly even features in a top twenty. I guess there are more interesting things to talk about, but optimizing images is still a major factor in reducing page load times. Use GIF wherever possible; screenshots and logos are normally ideal candidates for GIF. A GIF holds at most 256 colors, but you can normally reduce this further without compromising quality and so shrink the file size. JPGs should be examined even more closely as they are normally larger files; you can set quality anywhere between 1 and 100, and around 65 is normally an acceptable compromise between quality and size. Similarly, you should optimize PNG files for size versus quality. Photoshop’s Save for Web & Devices tool is invaluable for this purpose.
Also, always use the height and width attributes of the <img> tag as this speeds up loading, but do not scale an image using these attributes – they should match the actual image size.

8. GZip Everything

Gzip is a popular compression protocol, and about 95% of all web traffic comes from browsers that support Gzip decompression, so it is safe to say that all the Http traffic you deliver to the user’s browser should be Gzip compressed. The browser advertises support by sending Accept-Encoding: gzip, deflate in its request headers; the server (for example, Apache) then compresses the response and adds a Content-Encoding: gzip header so the browser knows to decompress the content. Most sites will Gzip Http pages, but css and js files should also be Gzip compressed.

9. Reduce the Number of Http Requests

Http requests are expensive as they require round trips to the web server, consider the following to minimize the number of requests:

  • Combine files – don’t have several .css or .js files unless strictly necessary; just combine them into a single larger file.
  • Combine images – if you have several images for your site’s design, consider combining them into a single image file and then selecting a segment using the CSS background-image and background-position properties. The CSS Sprites article on A List Apart is a good tutorial.

10. Examine Your Final Page

This isn’t really a separate best practice in itself, but it is probably the most effective way to find page bloat. Too often optimization is done by reviewing all the server-side files, but when a page is generated there are often new items added to the header which weren’t noticed or were added by an installed script. Starting the optimization at the final generated page is the best technique. This will also identify html bloat, such as empty tags (empty <span> and <div> tags are infamous for populating pages).

Posted in TUTORIALS

Importance of SSL for Exchange Servers

Posted by Alin D on December 18, 2010

There have been many times in the past when I have started a project for a new customer and discovered that they are not using SSL for their email servers.  Usually after a brief discussion they agree to implement SSL in the new system we are installing for them.

Occasionally they agree but insist on doing it in a less than ideal manner.  And sometimes, although rarely, they decline our advice and continue without SSL.

What is SSL?

SSL stands for Secure Socket Layer and is an encryption protocol that secures communications between two parties over insecure networks such as the internet.  Although still commonly referred to as SSL, its successor is named TLS (Transport Layer Security), which more accurately describes its role of securing communications at the Transport layer of the OSI model (e.g. the TCP protocol).

In an SSL/TLS secured communication the two parties (e.g. a web server and a web browser) agree on how to secure the connection they are establishing. The server sends the client its public encryption key (sometimes known as an SSL certificate), which the client then verifies against its own list of trusted certification authorities.  Once it has verified the key, the client will generate a random number, encrypt it with the server’s public key, and send it to the server.  The public key encryption ensures that only the server can read the random number.

Contrary to popular assumption it is not the server’s public key (or SSL certificate) that is used for the encrypted connection, rather it is only used to secure the initial exchange of the random number.  The random number is then used to encrypt and decrypt the actual connection traffic.

Why is SSL important for Exchange Servers?

Exchange servers come with useful remote access features such as Outlook Web Access, Outlook Anywhere, and ActiveSync.  These features allow your users to access their email from any location with an internet connection by using a web browser, their laptop, or a mobile device such as a smartphone.

This convenience carries with it some security risks, the most obvious being the risk of password credentials being compromised.

Operating any of these remote access services without SSL means that the connection, including password credentials, occurs over an unsecured HTTP connection.  HTTP is the protocol that most websites use.  It is fast, stable, and works through just about any firewall.  But HTTP has no built in security.  Every bit of data sent over HTTP is unencrypted, so when passwords are sent over HTTP they are sent “in the clear”, vulnerable to network sniffers.

Because so much of this remote access occurs from untrusted locations such as free wireless hotspots, it is critical that SSL be used to protect this traffic.

Recommendations for using SSL

Here are some recommendations for using SSL to secure your Exchange server’s remote access features.

  • Make it mandatory, not optional.  If you enable SSL but also still allow unencrypted HTTP you make it possible for an unwitting user to connect over the insecure method.
  • Use it internally as well as externally.  It is tempting to allow non-SSL connections from locations within your own corporate network but this is still risky.  Some security professionals consider all network segments to be untrusted.
  • Use a commercial Certificate Authority instead of a private one.  You may be tempted to save money on SSL certificates by installing a private CA and issuing your own, but this causes more headaches than it is worth.  Your private CA will not be trusted by devices such as smartphones or non-corporate computers, and will result in SSL warning messages that confuse users and can make some applications refuse to connect at all.  Because the SSL warning messages are also often found with phishing sites like fake banking sites it is not a good idea to get your users used to ignoring them.

Posted in Exchange

Setting the ASP Configuration in IIS7

Posted by Alin D on December 16, 2010

Configuring your ASP application environment in IIS 7 differs from the process used in previous versions of IIS. Microsoft has centralized the settings and made them easier to maintain. It’s important to set the ASP configuration at the proper level, rather than setting it globally and risking a security breach. To display the ASP settings for any level, select the level you want to use (Web server, Web site, or folder) in the Connections pane and double-click the ASP icon in the Features View. You’ll see the standard list of ASP configuration settings shown below.
IIS7 Manager

You can divide the ASP settings into three functional areas: Behavior, Compilation, and Services. The settings you make at an upper level affect all lower levels unless you make a specific change at the lower level. For example, if you set code page 0 as the default at the Web server level, then all Web sites and their folders will also use code page 0 until you set another value at one of these levels. The following sections describe each of the three functional areas and explain how you can modify the associated settings to meet specific needs.
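Under the covers, each of these ASP settings maps to an attribute of the `<asp>` element in applicationHost.config (or in a web.config at the site or folder level, which is how the inheritance described above works). A sketch with illustrative values:

```xml
<system.webServer>
  <!-- Attribute names match the configuration names IIS Manager uses -->
  <asp codePage="0"
       bufferingOn="true"
       enableChunkedEncoding="true"
       enableAspHtmlFallback="true"
       enableParentPaths="false" />
</system.webServer>
```

A value set here at the server level applies to every site and folder below it until a lower-level configuration file overrides it.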

Changing the Application Behavior

The Behavior area modifies how the application interacts with the user. Changing a property here will modify the way the application performs its task. The following list describes each of the properties in this area and explains how you can work with them (the configuration name appears in parentheses after the friendly name).

Code Page (codePage)

A code page is the set of characters that IIS uses to represent different languages and identities: English uses one code page, Greek another. Setting the code page to a specific value helps your application support the language of the caller. You can find a wealth of information, along with all of the standard code page numbers, at http://www.windows-scripting.info/unicode/codepages.html. IIS only understands the Windows code pages defined at http://www.windows-scripting.info/unicode/codepages.html#msftwindows. The default setting of 0 requests the code page from the client, which may or may not be a good idea depending on the structure of your application. If you plan to support specific languages in different parts of your Web site, always set the code page explicitly to obtain better results.

Enable Buffering (bufferingOn)

Buffering is the process of using a little memory to smooth the transfer of data from the ASP application to the caller. Using this technique makes the application run more efficiently, but costs some additional memory in exchange for the benefit. Generally, you’ll find that buffering is a good investment on any machine that can support it, so leave this setting at True (the default state).

Enable Chunked Encoding (enableChunkedEncoding)

Chunked transfers convert the body of a Web page into small pieces that the server can send to the caller more efficiently than sending the entire Web page. In addition, the caller receives a little of the Web page at a time so it’s easier to see progress as the Web page loads. You can learn more about this HTTP 1.1 technology at http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html.  This value defaults to True.
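The chunked format itself is simple: each chunk is its size in hexadecimal, a CRLF, the chunk bytes, and another CRLF, with a zero-length chunk terminating the stream. A small illustration of the wire format (this is not IIS code, just a sketch of what RFC 2616 describes):

```python
def encode_chunked(body: bytes, chunk_size: int = 8) -> bytes:
    """Encode a body using HTTP/1.1 chunked transfer encoding."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += format(len(chunk), "x").encode() + b"\r\n" + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # zero-length chunk ends the stream

def decode_chunked(data: bytes) -> bytes:
    """Reassemble the original body from a chunked stream."""
    body, pos = b"", 0
    while True:
        crlf = data.index(b"\r\n", pos)
        size = int(data[pos:crlf], 16)  # chunk size is hexadecimal
        if size == 0:
            return body
        body += data[crlf + 2:crlf + 2 + size]
        pos = crlf + 2 + size + 2  # skip chunk data and its trailing CRLF

wire = encode_chunked(b"Hello, chunked world!")
assert decode_chunked(wire) == b"Hello, chunked world!"
```

Because each chunk is self-describing, the server can start sending output before it knows the total response length, which is exactly what makes the progressive page loading described above possible.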

Enable HTML Fallback (enableAspHtmlFallback)

Sometimes your server will get busy. If the server gets too busy to serve your ASP application, you can create an alternative HTML file that contains a static version of the ASP application. The name of the HTML file must contain _asp in it. For example, if you create an ASP application named Hello.ASP, then the HTML equivalent is Hello_asp.HTML. This value defaults to True.
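The naming convention can be expressed as a tiny helper function (purely illustrative, not part of IIS; the casing of the result is an assumption):

```python
def html_fallback_name(asp_name: str) -> str:
    """Map an ASP file name to its static HTML fallback name."""
    stem, dot, ext = asp_name.rpartition(".")
    if dot and ext.lower() == "asp":
        return stem + "_asp.html"
    raise ValueError("not an ASP file: " + asp_name)

print(html_fallback_name("Hello.asp"))  # Hello_asp.html
```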

Enable Parent Paths (enableParentPaths)

Depending on the setup of your Web server, you might want an ASP application to reference a parent directory instead of the current directory, using the relative path nomenclature of ..\MyResource, where MyResource is a resource you want to access. For example, the ASP application may reside in a subfolder of a main Web site folder, and you may want to access resources in that main folder. Keeping the ASP application in a subfolder has security advantages because you can secure the ASP application folder at a stricter level than the main folder. In most cases, however, the resources for the ASP application reside at lower levels in the directory hierarchy, so this value defaults to False.

Posted in TUTORIALS

Some Free Server Tools Your Organization Needs

Posted by Alin D on December 2, 2010

This list of 10 free, essential tools is an amalgam of tools for all sizes of companies and networks. The tools covered here are generally cross-platform (i.e., they run on multiple OSes), and all are extremely useful to system administrators, network administrators, and first-level support personnel. While all of these tools are free to download and use in your network without payment of any kind to their developers or maintainers, not all are open source. The 10 essential tools listed here, in no particular order, come from various sources and represent the very best in tools currently used in large and small enterprises alike.

1. PSTools

PSTools is a suite of useful command-line Windows tools that IT professionals consider essential to survival in a Windows-infested network. It provides automation tools that have no rival. There is no greater free toolset for Windows available anywhere. Microsoft provides this suite free of charge. If it’s not part of your Windows diagnostic and automation arsenal, stop reading and download it now. Be sure to come back and finish the list. (You can multitask, can’t you?)

 

2. ShareEnum

ShareEnum is an obscure but very useful tool. ShareEnum shows you all file shares on your network. Even better, it shows you their associated security information. This very small (94K) tool might become one of the most valuable and useful security tools that you possess. It is another free tool from Microsoft.

 

3. Nagios

Nagios is an enterprise infrastructure monitoring suite. It’s free, mature and commercially supported. It has grown from a niche software project to a major force in contemporary network management. It’s used by such high-profile companies as Citrix, ADP, Domino’s Pizza, Wells Fargo, Ericsson and the U.S. Army.

 

 

4. Wireshark

If you run a network of any size or topology, Wireshark is a must-have application. It is a network packet capture and analysis program that assists you with your ongoing quest for a trouble-free network. Wireshark won’t prevent network problems, but it does allow you to analyze those problems in real time and possibly avoid failure.

5. Apache

The Apache project isn’t just a web server. The project, officially known as the Apache Software Foundation (ASF), consists of almost 100 different projects under the Apache umbrella. Yes, the famous and wildly popular HTTP server, Apache, is the project’s namesake and mainstay, but it isn’t the only nymph in the forest.

 

6. IP Plan

IP Plan is a little-known project that has potential in any size environment. It’s not a DNS service, but it is a web-based IP tracking application. The reasoning behind a tool like IP Plan is that DNS tracks systems that are in use. But to whom do you go when an IP address conflict arises, and how do you know which IP addresses are free to use? You won’t, unless you have a tool like IP Plan. It’s easy to use and free. What more could you want?

 

7. Eclipse

Eclipse is an Integrated Development Environment (IDE), which you can use to create applications with almost any computer programming language. Eclipse has wide language support, but it is historically viewed as a Java development tool. You can develop Windows applications in this very complete IDE as well as applications for every current operating system.

 

8. KVM

Kernel Virtual Machine (KVM), now owned and supported by Red Hat, is a free, full virtualization solution. Full virtualization means the hardware is abstracted, enabling you to run almost any OS in a virtual machine. Each virtual machine has its own display, network, disk, and BIOS, and it functions like a physical system. You install an OS into a virtual machine just as you would on a physical system. Yes, even Windows.

 

9. OpenOffice.org

OpenOffice.org (OO.o) is the free equivalent of Microsoft’s popular office suite. OO.o sports a word processor, spreadsheet, presentation program, database and more. It is compatible with Microsoft Office and can open or export almost every imaginable file format. OpenOffice.org is not only easy on the wallet (free), but it’s also the darling of IBM, which has created its own derivative: Lotus Symphony.

 

10. Webmin

Webmin, for the uninitiated, is the ultimate lazy system administrator tool. It’s a web-based interface to your UNIX or Linux system that covers almost every configurable aspect of the system and any add-on program you can ponder. You can’t rely on it for 100 percent of your system administration tasks, but you can probably use it for 99 percent of them.

Posted in TUTORIALS

How To Install an SSL Certificate using IIS 7

Posted by Alin D on November 4, 2010

To install an SSL certificate in IIS, you first need to issue a certificate for your web server. For this purpose, select the web server root node in the navigation tree of the management console and select the Server Certificates feature, as shown below:

After selecting Server Certificates, the IIS management console lists all the server certificates installed on the web server (see below). The first thing to note is that in IIS 7 you can install multiple server certificates on one web server, which can be used for the multiple websites set up on the web server (previous IIS versions allowed you to install only one server certificate per web server).

SSL Certificate IIS
In the Server Certificates feature details view in the IIS Management Console, the task pane on the right side shows the tasks necessary for installing server certificates. You can automatically create a certificate request that you can then use to request a new certificate from a CA. To create a new request, click the Create Certificate Request task link in the pane; this creates the same Base64-encoded request as in previous versions of IIS. Use this Base64-encoded request file to submit your request to the CA. After retrieving the certificate from the CA, complete the pending request by clicking the Complete Certificate Request link. In this way you can both request and configure an SSL certificate for a standalone web server. If you need to request an SSL certificate from your own CA, use the Online Certification Authority wizard by clicking the Create Domain Certificate link. This certificate will then be configured in your own CA and used for signing certificates issued by this CA.

This process is quite laborious if you are a developer who just wants to test SSL with your own web apps. Therefore, IIS 7 ships with an additional option: creating a self-signed certificate for just your own machine. Click the Create a Self-Signed Certificate link in the console, and all you need to specify is a friendly name to be displayed in the listing. The wizard creates a certificate by using the cryptographic functions of your local machine and automatically installs it on your web server. It is important to note that such certificates should be used only for development and testing purposes: only the browser running on your local machine will trust the certificate, and other clients will show warnings that the certificate is invalid.

Once you have configured and installed the SSL certificates, you can leverage them for SSL-based communication in the sites configured on your IIS. To do this, you need to configure the protocol bindings for SSL, as well as the SSL options for any web apps within the websites.

Configuring Bindings for SSL in IIS

Bindings are used to make the content of websites available through specific protocols, IP addresses, and ports. In addition, the host headers for accessing multiple web apps through the same IP address and port are also configured in the bindings. To use SSL for apps configured within a website, you need to configure an SSL protocol binding for that website. To do this, select your website in the navigation tree of the IIS Management Console and then select the Bindings link from the task pane on the right. A dialog appears that allows you to configure the bindings. Here, you add new bindings to make the contents available through different IP addresses, protocols, and ports, as shown below. Click Add to add a new binding to the website, and click Edit to modify existing bindings in the list.

SSL Certificate IIS

As you can see in the screenshot, the protocol has been set to https, running on the default IP address for the server and using port 443 for SSL-based access (the default port for SSL). In addition, in the dropdown list you can select the certificate to be used for SSL traffic on the website. Each certificate you installed previously is available for selection in this listing, and you can set up different certificates for different websites on the server. After you have configured the SSL binding for your website, you can enable SSL for web applications within the website.
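The binding created in this dialog ends up in applicationHost.config. A sketch of what the resulting entries look like (the site name and ID here are illustrative):

```xml
<site name="Default Web Site" id="1">
  <bindings>
    <!-- Plain HTTP on port 80, HTTPS on the default SSL port 443 -->
    <binding protocol="http" bindingInformation="*:80:" />
    <binding protocol="https" bindingInformation="*:443:" />
  </bindings>
</site>
```

The bindingInformation value is IP:port:hostheader; the `*` means the binding listens on all of the server’s IP addresses.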

Encrypting Information with SSL

SSL is enabled and configured for each individual site/app in IIS. Once you have configured the bindings at the website level, you can select the web app of your choice in the navigation tree of the IIS Management Console and then activate the SSL configuration as shown below:

SSL Certificate IIS
You can specify whether SSL encryption is required for the chosen web app and whether client certificates are required to authenticate users. If you use client certificate authentication, you will need to configure the mappings from certificates to the users that IIS eventually authenticates when it retrieves the certificate. Configure these mappings in the web.config file.
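In configuration terms, these SSL requirements come down to the `sslFlags` setting. A sketch of the relevant fragment, assuming you want to require both SSL and a client certificate for the application:

```xml
<system.webServer>
  <security>
    <!-- Require SSL, negotiate a client certificate, and refuse clients without one -->
    <access sslFlags="Ssl,SslNegotiateCert,SslRequireCert" />
  </security>
</system.webServer>
```

Drop `SslRequireCert` (and `SslNegotiateCert`) if you only want to force encryption without demanding client certificates.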

Posted in TUTORIALS