Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘Computing’

When and When Not to Use the Windows Azure VM Role

Posted by Alin D on June 29, 2011

Windows Azure now includes a Virtual Machine role that allows organizations to host their own virtual machines in the cloud.

Microsoft introduced this role as a way to ease the migration of applications to cloud computing. So instead of waiting until your code is “cloud-ready”, you can use this role to move applications to the cloud while refactoring old code.

Where the VM role fits within Azure

Windows Azure currently has three roles: Web, Worker and Virtual Machine (VM). The Web role is used for web applications running on Internet Information Services (IIS) 7.0, while the Worker role is basically for any type of process that runs in the background without a front-end interface.

The VM role is the newbie, and it uses a virtual hard disk (VHD) image of a Windows Server 2008 R2 server. The image is created internally on your network using Hyper-V technology and then uploaded to Windows Azure. This image can be customized and configured to run whatever software you would like to run in the cloud.

Before pushing virtual machines out to the cloud, however, it’s important to understand the pricing, licensing and prerequisites involved. Any instance of a VM role is priced by the compute hour, and licensing of the role is included in the cost (see the table below).

Compute Instance Size CPU Memory Instance storage I/O performance Cost per hour
Extra small 1.0 GHz 768 MB 20 GB Low $0.05
Small 1.6 GHz 1.75 GB 225 GB Moderate $0.12
Medium 2 x 1.6 GHz 3.5 GB 490 GB High $0.24
Large 4 x 1.6 GHz 7 GB 1,000 GB High $0.48
Extra large 8 x 1.6 GHz 14 GB 2,040 GB High $0.96

 This table can be found on Microsoft’s Azure Compute page.

All virtual machines are created using Hyper-V Manager on a Windows Server 2008 operating system, where R2 is recommended. You’ll also find that Hyper-V, IIS 7.0, the Windows Azure SDK and ASP.NET are all required, with an optional install of Visual Studio 2010 also available. (More requirements for the Azure VM role are listed on MSDN.)

Where the VM role can be used

So why would you want to implement the VM role? Well, let’s say you’ve done your due diligence and decided on Windows Azure as your cloud platform of choice. You are ready to move forward but have a lot of existing legacy applications that are written differently and may not work on the Azure platform. A rewrite of this code could have a lengthy roadmap even if you are utilizing agile programming. In my opinion, this is where the VM role should be used.

The VM role gives you complete control over the system where your code runs, so while you are rewriting code to work in Azure, you can also create and deploy customized VHD images to the cloud immediately. In other words, the VM role can be used to migrate an on-premises application to run as a service in the Windows Azure cloud.

Another ideal time to implement the VM role is when you aren’t sure whether you want to stay with Windows Azure for the long term. What if you decide to change vendors? Windows Azure is a Platform as a Service (PaaS), which is simply a framework for developers to create applications and store data.

Basically, once you develop your product for Windows Azure, it runs on Windows Azure. But if your company takes a new direction and wants to leverage a different cloud platform from Amazon or VMware, guess what? You’ll have to recode because you won’t be able to move that application. The VM role acts as a bridge that connects PaaS with Infrastructure as a Service (IaaS); it gives Microsoft an IaaS platform and provides you with the portability to move vendors if a change of direction is needed.

When not to use the Azure VM role

While the use cases above make sense, to me a VM role in the cloud doesn’t seem like the best option for the long term. For starters, if you push virtual machines to the cloud, you need good upload speeds, so the bigger the VM, the longer that upload process will take. Secondly, Microsoft doesn’t maintain your virtual machines for you; you are responsible for patching them and uploading the changes as a differencing disk.

When you look at it that way, maintaining a VM role for an extended period of time seems like a nightmare. Not only could the upkeep be tremendous, but differencing disks are not my favorite virtual machine technology anyway, as they are prone to corruption. Snapshot technology is much easier to deal with.

So while the Windows Azure VM role is good to have in the Azure platform, in my opinion it’s not a great long-term PaaS solution. What it can do is help bridge the gap while you are busy coding for a true Platform as a Service.

Posted in Azure | Leave a Comment »

Things that you need to consider for cloud based email

Posted by Alin D on June 7, 2011

Developments in cloud-based computing have generated quite a bit of excitement and promise, especially for small to medium-sized businesses. Those who evangelize the cloud will often cite the many benefits of moving to a cloud-based email service. The oft-quoted reasons for moving email services off site fall in line with the reasons used to justify any new technology:

  • Ease of scalability
  • Ease of software updates
  • Email access anywhere
  • Better disaster recovery
  • Ease of implementation
  • And of course, reduced costs

So when a vendor, or even someone in your own organization, throws these at management looking to save money and increase productivity, the question quickly moves from “Why should we move to the cloud?” to “Why has it taken us so long to move our email to the cloud?”

Is it really that easy?

Cloud-based email services make a whole lot of sense for many organizations. By doing a bit of research, you are certain to find at least one case study on how moving email to the cloud helped someone in your specific industry. Yet even with good reasons and plenty of research to support the decision, nothing should be done without considering every angle, because if we have learned one thing over the years, it is that nothing in IT is risk-free.

So what does an interested SMB need to consider when all the arrows point to moving to the cloud? Let’s take a look.

1. Control

When your email resides on servers that are housed at your location, you are responsible for configuring the software, maintaining the hardware, updating and patching the server(s), cooling the room, etc. But you also have complete control over your email and backups. Moving to the cloud means you are giving up control and possibly ownership. This lack of control can lead to real world problems. For instance, if your organization has a one year deletion policy, is your cloud provider able to adhere to that? Conversely, if you have a no delete policy can this be achieved as well?

A rarer occurrence, but one with much harsher repercussions, is the event that an investigation needs to take place. Will emails be available for forensics when needed? If so, will there be any issues with the chain of custody and with proving that the evidence was tamper-proof?

2. Availability

Unless you have been living under a rock, you are well aware of the attacks against Gmail in recent months. The decision to move email services to a cloud provider should always be based on how well the provider can ensure that mail servers will deliver an acceptable percentage of uptime. Of course, it’s one thing to say that you guarantee 99.9999 percent uptime and quite another to deliver it. So when a cloud provider makes a claim regarding availability, make sure your IT team speaks with the sales engineers, not just the salesperson, to see exactly what is in place to mitigate things like interruptions and denial-of-service attacks.

3. Security and Spam Protection

One of the biggest draws to the cloud for email is the fact that the provider will take care of security and anti-spam. Again, this is something that you are entrusting to the provider and giving up control over. If you are unhappy with the amount of spam that gets by the filters, or if the false positive rate is higher than acceptable, you can’t simply switch to a different solution.

This should be at the forefront of any discussions you have with potential email service providers. Find out what solutions they have in place and research them just as if you were buying the protection for your own servers.

4. Cost

Of course, cost is always the number one reason SMBs look to the cloud. It is hard to find anyone who will say that a cloud-based solution isn’t less expensive in the long run than running, securing and maintaining your own email servers. However, the numbers may not always equal the level of service you expect, and costs may not always be transparent. A cloud provider may charge extra for business-grade anti-spam protection. Perimeter security or virus scanning may also carry additional costs. Finally, storage is never a one-size-fits-all solution, so it will always be a variable.

The cloud is definitely a solution worth looking into for a number of reasons; however, it would be equally prudent to weigh all of these considerations before signing any type of contract.

 

Posted in Exchange | Leave a Comment »

Setting the ASP Configuration in IIS7

Posted by Alin D on December 16, 2010

Configuring your ASP application environment in IIS 7 differs from the process used in previous versions of IIS. Microsoft has centralized the settings and made them easier to maintain. As previously mentioned, it’s important to set the ASP configuration at the proper level, rather than try to set the configuration globally and risk a security breach. To display the ASP settings for any level, select the level you want to use (Web server, Web site, or folder) in the Connections pane and double-click the ASP icon in the Features View. You’ll see the standard list of ASP configuration settings shown below.
IIS7 Manager

You can divide the ASP settings into three functional areas: Behavior, Compilation, and Services. The settings you make at an upper level affect all lower levels unless you make a specific change at the lower level. For example, if you set code page 0 as the default at the Web server level, then all Web sites and their folders will also use code page 0 until you set another value at one of these levels. The following sections describe each of the three functional areas and tell how you can modify the associated settings to meet specific needs.

Changing the Application Behavior

The Behavior area modifies how the application interacts with the user. Changing a property here will modify the way the application performs its task. The following list describes each of the properties in this area and explains how you can work with them (the configuration name appears in parentheses after the friendly name).

Code Page (codePage)

A code page is the set of characters that IIS uses to represent different languages and identities. English uses one code page, Greek another. Setting the code page to a specific value helps your application support the language of the caller. You can find a wealth of information, along with all of the standard code page numbers, at http://www.windows-scripting.info/unicode/codepages.html. IIS only understands the Windows code pages defined at http://www.windows-scripting.info/unicode/codepages.html#msftwindows. The default setting of 0 requests the code page from the client, which may or may not be a good idea depending on the structure of your application. If you plan to support specific languages using different parts of your Web site, always set the code page to obtain better results.

Enable Buffering (bufferingOn)

Buffering is the process of using a little memory to smooth the transfer of data from the ASP application to the caller. Using this technique makes the application run more efficiently, but does cost some additional memory to gain the benefit. Generally, you’ll find that buffering is a good investment on any machine that can support it and should keep this setting set to True (the default state).

Enable Chunked Encoding (enableChunkedEncoding)

Chunked transfers convert the body of a Web page into small pieces that the server can send to the caller more efficiently than sending the entire Web page. In addition, the caller receives a little of the Web page at a time so it’s easier to see progress as the Web page loads. You can learn more about this HTTP 1.1 technology at http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html.  This value defaults to True.

Enable HTML Fallback (enableAspHtmlFallback)

Sometimes your server will get busy. If the server gets too busy to serve your ASP application, you can create an alternative HTML file that contains a static version of the ASP application. The name of the HTML file must contain _asp in it. For example, if you create an ASP application named Hello.ASP, then the HTML equivalent is Hello_asp.HTML. This value defaults to True.

Enable Parent Paths (enableParentPaths)

Depending on the setup of your Web server, you might want an ASP application to reference a parent directory instead of the current directory using the relative path nomenclature of ..\MyResource, where MyResource is a resource you want to access. For example, the ASP application may reside in a subfolder of a main Web site folder, and you may want to access resources in that main folder. Keeping the ASP application in a subfolder has security advantages because you can secure the ASP application folder at a stricter level than the main folder. In most cases, however, the resources for the ASP application reside at lower levels in the directory hierarchy. Consequently, this value defaults to False.
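If you prefer to script these settings rather than click through the Features View, they can also be set with AppCmd.exe from an elevated command prompt. This is only a sketch: the site name is hypothetical, and the paths assume a default IIS 7 installation.

%windir%\system32\inetsrv\appcmd.exe set config /section:asp /enableParentPaths:false

%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:asp /bufferingOn:true /commit:apphost

The first command changes the setting at the Web server level; the second overrides it for a single site (written as a location tag in applicationHost.config), mirroring the inheritance behavior described above.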

Posted in TUTORIALS | Leave a Comment »

Windbg Minidump Tutorial: Setting Up & Reading Minidump Files

Posted by Alin D on December 15, 2010

This is a tutorial on how to set up and read your minidump files when you receive a BSOD (blue screen of death), in an attempt to gain further insight into the cause of the problem. First things first: download the latest debugging tools from the Microsoft site. Search for “debugging tools microsoft” in Google.

Then go to Start/Start Search and type in the command cmd.

Then change directories to:

C:\Program Files\Debugging Tools for Windows (x86)

by using the command:

cd c:\program files\debugging tools for windows (x86)

It's case insensitive when using the cd command.

Then type in:
windbg.exe -z c:\windows\minidump\mini061909-01.dmp -c "!analyze -v"

Your minidump file is located at C:\Windows\Minidump\Mini062009-01.dmp. It’ll be in the form “MiniMMDDYY-01.dmp”.

KERNEL SYMBOLS ARE WRONG. PLEASE FIX SYMBOLS TO DO ANALYSIS

If somewhere in the output of the Bugcheck Analysis you see an error like:

***** Kernel symbols are WRONG. Please fix symbols to do analysis.

Then it’s most likely that you are using old or incompatible symbols, the symbol files are corrupt, or you don’t have the proper symbols at the specified location when Windbg tries to analyze the minidump file. So what I did was open up the Windbg program located at C:\Program Files\Debugging Tools for Windows (x86) (in Vista, and I believe it’s the same location for XP).

SETTING THE SYMBOL FILE PATH VIA WINDBG COMMAND LINE:

This is an important step, so ensure that your symbol file path is set correctly, lest you get the “Kernel symbols are WRONG” error or other types of errors. Now set the Symbol File Path (File/Symbol File Path) to:

SRV*e:\symbols*http://msdl.microsoft.com/download/symbols

However, for some reason I found that you cannot change the Symbol File Path directly in the “File/Symbol File Path” field. Instead, you need to change it through the Windbg command window by going to:

“View/Command”

In the bottom of the command window beside the “kd>” prompt type this in:

.sympath SRV*e:\symbols*http://msdl.microsoft.com/download/symbols

The part between the two asterisks (*) is where the symbols from Microsoft’s servers will be downloaded to. The download is fairly large (approximately 22 MB), so make sure that you have sufficient disk space.

SETTING SYMBOL FILE PATH IN THE ENVIRONMENT VARIABLE:

Alternatively, you can set it as an environment variable, either as a system or as a user environment variable. To do this, press WINDOWS KEY+E. The WINDOWS KEY is the key to the right of the LEFT CTRL key on the keyboard. This will open up Windows Explorer.

Then click on the “Advanced system settings” at the top left of the window. This step applies to Vista only. For XP users, simply click on the Advanced tab.

Then click on the “Environment Variables” button at the bottom of the window.

Then click on the “New” button under System Variables. Again, you can create it as a user environment variable instead.

In the “Variable Name” type:
_NT_SYMBOL_PATH

In the “Variable Value” type:
symsrv*symsrv.dll*e:\symbols*http://msdl.microsoft.com/download/symbols

If you set the symbol file path as a system environment variable I believe you may have to reboot your computer in order for it to take effect.
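If you prefer the command line, a hedged alternative (assuming the same e:\symbols cache folder used above) is to set the variable with setx and then open a new command prompt:

setx _NT_SYMBOL_PATH SRV*e:\symbols*http://msdl.microsoft.com/download/symbols

Adding the /M switch would make it a system-wide variable instead of a per-user one.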

OUTPUT OF WINDBG COMMAND

So the following is the output for my crash:

Microsoft (R) Windows Debugger Version 6.11.0001.404 X86
Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [c:\windows\minidump\mini062609-01.dmp]
Mini Kernel Dump File: Only registers and stack trace are available

Symbol search path is: SRV*e:\symbols*http://msdl.microsoft.com/download/symbols;I:\symbols
Executable search path is:
Windows Server 2008/Windows Vista Kernel Version 6001 (Service Pack 1) MP (2 procs) Free x86 compatible
Product: WinNt, suite: TerminalServer SingleUserTS Personal
Built by: 6001.18226.x86fre.vistasp1_gdr.090302-1506
Machine Name:
Kernel base = 0x8201d000 PsLoadedModuleList = 0x82134c70
Debug session time: Fri Jun 26 16:25:11.288 2009 (GMT-7)
System Uptime: 0 days 21:39:36.148
Loading Kernel Symbols
………………………………………………………
……………………………………………………….
…………………………………………………..
Loading User Symbols
Loading unloaded module list
……………………….
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

Use !analyze -v to get detailed debugging information.

BugCheck A, {8cb5bcc0, 1b, 1, 820d0c1f}

Unable to load image \SystemRoot\system32\DRIVERS\SymIMv.sys, Win32 error 0n2
*** WARNING: Unable to verify timestamp for SymIMv.sys
*** ERROR: Module load completed but symbols could not be loaded for SymIMv.sys
Unable to load image \SystemRoot\system32\DRIVERS\NETw3v32.sys, Win32 error 0n2
*** WARNING: Unable to verify timestamp for NETw3v32.sys
*** ERROR: Module load completed but symbols could not be loaded for NETw3v32.sys
Processing initial command ‘!analyze -v’
Probably caused by : tdx.sys ( tdx!TdxMessageTlRequestComplete+94 )

Followup: MachineOwner
———

0: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

IRQL_NOT_LESS_OR_EQUAL (a)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high.  This is usually
caused by drivers using improper addresses.
If a kernel debugger is available get the stack backtrace.
Arguments:
Arg1: 8cb5bcc0, memory referenced
Arg2: 0000001b, IRQL
Arg3: 00000001, bitfield :
bit 0 : value 0 = read operation, 1 = write operation
bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 820d0c1f, address which referenced memory

Debugging Details:
——————

WRITE_ADDRESS: GetPointerFromAddress: unable to read from 82154868
Unable to read MiSystemVaType memory at 82134420
8cb5bcc0

CURRENT_IRQL:  1b

FAULTING_IP:
nt!KiUnwaitThread+19
820d0c1f 890a            mov     dword ptr [edx],ecx

CUSTOMER_CRASH_COUNT:  1

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0xA

PROCESS_NAME:  System

TRAP_FRAME:  821126c4 -- (.trap 0xffffffff821126c4)
ErrCode = 00000002
eax=85c5d4d8 ebx=00000000 ecx=8cb5bcc0 edx=8cb5bcc0 esi=85c5d420 edi=ed9c7048
eip=820d0c1f esp=82112738 ebp=8211274c iopl=0         nv up ei pl nz na pe nc
cs=0008  ss=0010  ds=0023  es=0023  fs=0030  gs=0000             efl=00010206
nt!KiUnwaitThread+0x19:
820d0c1f 890a            mov     dword ptr [edx],ecx  ds:0023:8cb5bcc0=????????
Resetting default scope

LAST_CONTROL_TRANSFER:  from 820d0c1f to 82077d24

STACK_TEXT:
821126c4 820d0c1f badb0d00 8cb5bcc0 87952ed0 nt!KiTrap0E+0x2ac
8211274c 8205f486 00000002 85c5d420 ed9c7048 nt!KiUnwaitThread+0x19
82112770 8205f52a ed9c7048 ed9c7008 00000000 nt!KiInsertQueueApc+0x2a0
82112790 8205742b ed9c7048 00000000 00000000 nt!KeInsertQueueApc+0x4b
821127c8 8f989cd0 e79e1e88 e79e1f70 00000000 nt!IopfCompleteRequest+0x438
821127e0 8a869ce7 00000007 00000000 00000007 tdx!TdxMessageTlRequestComplete+0x94
82112804 8a869d33 e79e1f70 e79e1e88 00000000 tcpip!UdpEndSendMessages+0xfa
8211281c 8a560c7f e79e1e88 00000001 00000000 tcpip!UdpSendMessagesDatagramsComplete+0x22
8211284c 8a86e0ab 00000000 00000000 889a0558 NETIO!NetioDereferenceNetBufferListChain+0xcf
82112860 8a6d341e 878689e8 e79e1e88 00000000 tcpip!FlSendNetBufferListChainComplete+0x1c
82112894 8a6084f1 86c440e8 e79e1e88 00000000 NDIS!ndisMSendCompleteNetBufferListsInternal+0xb8
821128a8 8fe3f0ee 87a092b0 e79e1e88 00000000 NDIS!NdisFSendNetBufferListsComplete+0x1a
821128cc 8a6084f1 87a07230 e79e1e88 00000000 pacer!PcFilterSendNetBufferListsComplete+0xba
821128e0 8fe516f7 88940c10 e79e1e88 00000000 NDIS!NdisFSendNetBufferListsComplete+0x1a
WARNING: Stack unwind information not available. Following frames may be

Posted in TUTORIALS | Leave a Comment »

Data Compression in SQL Server 2008

Posted by Alin D on December 15, 2010

Data compression is a new feature introduced in SQL Server 2008. It enables DBAs to manage MDF files and backup files more effectively. There are two types of compression:

1. Row Level Compression: This type of compression works at the row level of the data page.

  • Fixed-length data types are stored as if they were variable length. For instance, char(10) is a fixed-length data type; if we store “Venkat” in it, the name occupies 6 characters and the remaining 4 are wasted in earlier versions, whereas in SQL Server 2008 only the 6 characters actually used are stored.
  • NULL values and zeros are optimized so that they take no space on the disk.
  • The amount of metadata used to store the row is reduced.

2. Page Level Compression: This compression works at the page level.

  • Page-level compression includes row-level compression, with prefix and dictionary compression applied on top of it.
  • Prefix Compression – This works at the column level. Repeated prefixes are removed and a reference is stored in the compression information (CI) structure, which is located next to the page header.
  • Dictionary Compression – This is applied to the page as a whole. Repeated values anywhere on the page are removed and replaced with a reference.

How it works:

Consider a user requesting data. The relational engine compiles and parses the request and then asks the storage engine for the data.

Data Compression

On disk, the data is in compressed format. The storage engine sends the compressed data to the buffer cache, which takes care of handing it to the relational engine in uncompressed form. The relational engine performs its modifications on the uncompressed data and sends it back to the buffer cache. The buffer cache compresses the data again, keeps it for future use and, in turn, sends a copy to the storage engine.
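Enabling compression itself is a single DDL statement. The following is only a sketch, with a hypothetical dbo.Orders table, showing how you might estimate the savings first and then turn on page compression:

-- Estimate how much space page compression would save for the table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'Orders',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Rebuild the table with page compression enabled.
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
-- Use DATA_COMPRESSION = ROW instead for row-level compression only.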

Advantages:

  1. More data can be held in the buffer cache, so fewer reads from disk are needed, which in turn reduces I/O.
  2. Disk space usage is greatly reduced.

Disadvantages:

  1. More CPU cycles are used to decompress the data.
  2. Compression can have a negative impact if the data contains few NULL values or zeros and already fits its declared data types exactly.

Posted in SQL | Leave a Comment »

Identity Property Range Checking in SQL Server

Posted by Alin D on December 15, 2010

The IDENTITY property for a column of a numerical data type is a frequently used method to achieve system-generated “uniqueness” for each row in a table. Such a column is in turn a popular choice for the PRIMARY KEY constraint. Most of the time one would choose the data type int for the underlying column. However, the IDENTITY property can be defined on any integer-like data type, and even on the decimal data type as long as the chosen scale is 0. By default SQL Server uses only the positive values unless you specify otherwise, so when you opt to start with a negative seed value, this is perfectly fine for SQL Server, and by doing so you essentially double the range of possible values for most of the available data types. It may hurt one’s aesthetic sensibilities, but if you take negative values into account, this gives you the following range of possible values:

tinyint: 0 to 255
smallint: -32,768 to 32,767
int: -2,147,483,648 to 2,147,483,647
bigint: -2^63 to 2^63-1

If you decide to use a decimal data type such as decimal(38, 0), this gives you a range of roughly -10^38 to 10^38-1 possible values, which, for almost any practical purpose, should be more than enough.

But what can actually happen if  you are about to exceed this range?

Let’s create a very simple test case:

CREATE TABLE dbo.id_overflow (
    col1 int IDENTITY(2147483647,1)
);
GO

The above script creates a new table, dbo.id_overflow, with only one column, col1. This column is of type int with the IDENTITY property defined on it. The seed value is chosen to be the maximum value for the int type, which is 2147483647. I just arbitrarily picked the int data type; I could have chosen any other eligible data type and the result would still be the same. So, when we now insert into this table, the very first insert statement will succeed, while any subsequent one will fail with an arithmetic overflow error.

--This insert will succeed
INSERT INTO dbo.id_overflow DEFAULT VALUES;
--This insert will fail
INSERT INTO dbo.id_overflow DEFAULT VALUES;

(1 row(s) affected)
Msg 8115, Level 16, State 1, Line 2
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

So far, everything is as expected and when we look at the content of the table we only see the one row from the first insert.

SELECT
    *
FROM
    dbo.id_overflow;

col1
2147483647

(1 row(s) affected)

But what do you do in such a case? You can’t insert any more rows into this table. Even if there are gaps in the sequence of the existing IDENTITY values, these gaps won’t be reused automatically. Once allocated, SQL Server doesn’t care about them, and if an insert doesn’t succeed for whatever reason, the freshly allocated value is simply gone.

Essentially, the only feasible solution to this problem is to choose a “bigger” data type. So, a very simplified change script to change the data type in our example to bigint would look like this:

IF OBJECT_ID('dbo.id_overflow') IS NOT NULL
    DROP TABLE dbo.id_overflow;
GO
CREATE TABLE dbo.id_overflow (
    col1 int IDENTITY(2147483647,1)
)
GO

--This insert will succeed
INSERT INTO dbo.id_overflow DEFAULT VALUES;

--Now change the data type to a bigger one.
ALTER TABLE dbo.id_overflow ALTER COLUMN col1 bigint;

--This insert will now succeed as well
INSERT INTO dbo.id_overflow DEFAULT VALUES;

SELECT
    *
FROM
    dbo.id_overflow;

If you run this batch, it will finish without an error and yield the expected result set of 2 rows. But, as mentioned above, a change script in almost any real-world database would be much more complex. Indexes would have to be changed, referencing tables would have to be changed, and any code where the value of that column is assigned to a variable of type int would have to be reviewed, and so on.

It is not hard to predict that you’re in deep trouble when this table is one of your main tables in a database and is referenced by many other tables and/or in many places in your code.

I was bitten by a similar scenario not that long ago. Fortunately it was “only” a lookup table with an IDENTITY column on a smallint column. And I was fortunate that I could simply reseed the IDENTITY value, because the last 7,000+ inserts had failed due to a misunderstanding between the developers of the calling application and me on how a certain parameter to a procedure should be used. But it was still enough trouble for me to decide to write a small check script that is now part of my weekly scripts and that gives me all the tables having such an IDENTITY column, along with the last value consumed and the buffer I have left before I run out of values again. Here it is:

;WITH TypeRange AS (
SELECT
    'bigint' AS [name],
    9223372036854775807 AS MaxValue,
    -9223372036854775808 AS MinValue
UNION ALL
SELECT
    'int',
    2147483647,
    -2147483648
UNION ALL
SELECT
    'smallint',
    32767,
    -32768
UNION ALL
SELECT
    'tinyint',
    255,
    0
),
IdentBuffer AS (
SELECT
    OBJECT_SCHEMA_NAME(IC.object_id) AS [schema_name],
    O.name AS table_name,
    IC.name AS column_name,
    T.name AS data_typ,
    CAST(IC.seed_value AS decimal(38, 0)) AS seed_value,
    IC.increment_value,
    CAST(IC.last_value AS decimal(38, 0)) AS last_value,
    CAST(TR.MaxValue AS decimal(38, 0)) -
        CAST(ISNULL(IC.last_value, 0) AS decimal(38, 0)) AS [buffer],
    CAST(CASE
            WHEN seed_value < 0
            THEN TR.MaxValue - TR.MinValue
            ELSE TR.maxValue
        END AS decimal(38, 0)) AS full_type_range,
    TR.MaxValue AS max_type_value
FROM
    sys.identity_columns IC
    JOIN
    sys.types T ON IC.system_type_id = T.system_type_id
    JOIN
    sys.objects O ON IC.object_id = O.object_id
    JOIN
    TypeRange TR ON T.name = TR.name
WHERE
    O.is_ms_shipped = 0)

SELECT
    IdentBuffer.[schema_name],
    IdentBuffer.table_name,
    IdentBuffer.column_name,
    IdentBuffer.data_typ,
    IdentBuffer.seed_value,
    IdentBuffer.increment_value,
    IdentBuffer.last_value,
    IdentBuffer.max_type_value,
    IdentBuffer.full_type_range,
    IdentBuffer.buffer,
    CASE
        WHEN IdentBuffer.seed_value < 0
        THEN (-1 * IdentBuffer.seed_value +
          IdentBuffer.last_value) / IdentBuffer.full_type_range
        ELSE (IdentBuffer.last_value * 1.0) / IdentBuffer.full_type_range
    END AS [identityvalue_consumption_in_percent]
FROM
    IdentBuffer
ORDER BY
    [identityvalue_consumption_in_percent] DESC;

Since SQL Server 2005 it has been really easy to get this information. As you can see from the script, I have omitted the decimal(38,0) alternative. For me, a bigint column with a negative seed value is more than I would possibly ever need. I got into the habit of running this regularly to monitor how many values we have left in the buffer before it blows up again and to get a feeling for “how urgent” it is to look at the inevitable changes to the database. A possible variation would be to send out an alert when a certain threshold is reached, but that I leave up to your imagination.
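As a hedged sketch of such a variation (the 80 percent threshold and the restriction to int columns are my own arbitrary choices, not part of the script above), you could raise an error whenever a table gets close to the end of its range and let your monitoring pick it up:

DECLARE @msg nvarchar(2048);

-- Build a comma-separated list of int IDENTITY columns past 80% of the positive int range.
SELECT @msg = STUFF((
    SELECT ', ' + OBJECT_SCHEMA_NAME(IC.object_id) + '.' + OBJECT_NAME(IC.object_id)
    FROM sys.identity_columns IC
        JOIN sys.types T ON IC.system_type_id = T.system_type_id
    WHERE T.name = 'int'
        AND CAST(ISNULL(IC.last_value, 0) AS bigint) > 2147483647 * 0.8
    FOR XML PATH('')), 1, 2, '');

IF @msg IS NOT NULL
    RAISERROR('IDENTITY columns above 80%% of the int range: %s', 16, 1, @msg);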

Posted in SQL | Leave a Comment »

Sharepoint Extranet Setup with Forms Based Authentication

Posted by Alin D on December 15, 2010

At some point in your company’s SharePoint usage, you will probably want to expand your Intranet sites to clients. Extranet deployment usually requires additional server and license resources, which can add to the expense of the SharePoint deployment. Fortunately, SharePoint Server 2007 has the authentication zones feature, which allows you to set up different authentication methods for your employees and customers and minimize the additional hardware and software licenses required.

In this article we will be configuring Forms Based Authentication on the newest version of Microsoft Office SharePoint Server with Service Pack 2, running on Windows Server 2008.

SharePoint authentication zones

By default, a SharePoint web application has only the Default zone configured, which corresponds to our LDAP (Active Directory) authentication mode. However, there are several other zones that can be used for authenticating site users (see the screenshot below).

Alternate Access Mapping Collection

In this instance we will be configuring our Extranet zone with Forms Based Authentication, so our external users (clients/customers) will use a different credentials database. In most cases we do not want external users to have any accounts in Active Directory, as it would be a drain on resources to maintain an Active Directory only for these users. Therefore, in this scenario we will be using ASP.NET functionality to store user credentials in a SQL Server database.

Configure Extranet zones with users stored in a SQL Server Database

We need to set the ASP.NET services engine to use a SQL Server database to store user credentials, as well as membership, profiles and the SQL Web event provider. To do this, you will need to run aspnet_regsql.exe, located in the C:\Windows\Microsoft.NET\Framework\v2.0.50727 folder (or C:\Windows\Microsoft.NET\Framework64\v2.0.50727 for 64-bit OSs).

After reading the application description in the first screen and clicking next, we then ensure that Configure SQL Server for application services is selected and click Next (see screenshot below).

Configure SQL Server for application services

Next, we enter our SQL Server credentials. This is very useful because we can use the same SQL Server instance that is used for SharePoint and avoid the expense of purchasing an additional SQL Server license for external user authentication. Alternatively, we could install the free SQL Server Express, which is capable of handling Forms Based credentials.

Select Servers And Databases

Next, confirm that the SQL Server credentials for ASP.NET services are correct, and click Next. By default, aspnet_regsql.exe will use the ‘aspnetdb’ database for storing user data.

Confirm your settings
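If you would rather skip the wizard entirely, the same database can be provisioned from the command line. This is only a sketch with hypothetical server and database names; the -A switch lists the features to register (m for membership, r for the role manager, p for profiles, w for the SQL Web event provider):

aspnet_regsql.exe -S SQLSERVER\SHAREPOINT -E -d aspnetdb -A mrpw

The -E switch uses the current Windows credentials to connect; replace it with -U and -P if you need SQL authentication.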

Now we must configure the provider for membership, profiles and the role manager in SharePoint.

First we need to expand our Intranet site that is in the Default Zone (with default, Active Directory based Authentication). This is done in Central Administration / Create or Extend Web Application.

Central Administration – Create or extend Web application

Select Extend an existing Web application and then select the web application we need to extend to external, SQL Server based users.

Extend an existing Web application

The most important part of the configuration form, after you select the correct application to extend, is shown on the screen below.

Configuration part

We need to enter the external host name that will be visible from every workstation, so it’s important to have a good domain name for our extranet site, as it will probably be used by our clients and customers. We may also need to enable anonymous authentication, but in this scenario we won’t be using that for our Extranet site.

At the bottom of the configuration, we need to select the correct zone for our newly extended site. Here we will select Extranet.

Load Balanced URL

Before you accept these changes, ensure that NTLM authentication is selected, which is the only supported mode for Forms Based Authentication.

Posted in TUTORIALS | 1 Comment »

SQL Azure Limitations and Supported Features

Posted by Alin D on December 15, 2010

Even though SQL Azure is based on SQL Server, it includes some limitations because of its Internet availability and cloud deployment. When you use SQL Server on-premises, the tools and client APIs have full access to the SQL Server instance, and communications between the client and the database take place in a homogeneous and controlled environment. The first release of SQL Azure offers only a subset of the functionality of the SQL Server database. One of the most important limitations in SQL Azure is the fact that the size of the database can’t exceed 10 GB. So, as a database administrator or an architect, you must plan the growth and availability of data accordingly. The supported and unsupported features of SQL Azure in version 1.0 are described below.

Database Features

SQL Azure supports the following database features:
• CRUD operations on tables, views, and indexes
• TSQL query JOIN statements
• Triggers
• TSQL functions
• Application stored procedures (only TSQL)
• Table constraints
• Session-based temp tables
• Table variables
• Local transactions
• Security roles
SQL Azure does not support the following database features:
• Distributed queries
• Distributed transactions
• Any TSQL queries and views that change or retrieve physical resource information, like physical server DDL statements, Resource Governor, and filegroup references
• Spatial data types

Application Features

SQL Azure does not support the following application-level features:
• Service Broker
• HTTP access
• CLR stored procedures

Administration Features

SQL Azure supports the following administration features:
• Plan and statistics
• Index tuning
• Query tuning
SQL Azure does not support the following administration features:
• Replication
• SQL profiler
• SQL trace flag
• Backup command
• Configuration using the sp_configure stored procedure
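To make the supported lists above a little more concrete, here is a small sketch (table and column names are hypothetical) that exercises a few of those features: table constraints, a local transaction and a session-based temp table.

CREATE TABLE dbo.Orders (
    OrderID int NOT NULL PRIMARY KEY CLUSTERED,          -- table constraint
    Amount  decimal(10,2) NOT NULL CHECK (Amount >= 0)   -- table constraint
);

BEGIN TRANSACTION;    -- local transactions are supported
    INSERT INTO dbo.Orders (OrderID, Amount) VALUES (1, 19.99);
COMMIT TRANSACTION;

CREATE TABLE #RecentOrders (OrderID int NOT NULL);       -- session-based temp table
INSERT INTO #RecentOrders SELECT OrderID FROM dbo.Orders;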

Posted in Azure, SQL | Leave a Comment »

MS10-106 patches DoS vulnerability in Exchange 2007

Posted by Alin D on December 15, 2010

December’s round of patches from Microsoft includes a patch for Microsoft Exchange 2007 SP2. This vulnerability is rated as moderate, but I know several C-level types who would consider anything that interrupts email nothing short of a national disaster.

This vulnerability, which may also be discussed in CVE-2010-3937 (under review at the time of this writing), can be exploited by an authenticated user making a specially crafted RPC call to an Exchange 2007 SP2 server running the mailbox role. Microsoft rates this as moderate severity. Respectfully, I beg to differ.

Consider the Denial of Service attack for a moment. It is exactly as the name indicates, an attack that denies legitimate access to the service provided. Now consider the number of mission critical processes that depend on your email systems every day. What happens to those processes when the email system is unavailable?

I see this at many of the clients I work with: business processes that depend on email and require an almost ACID approach, even though that is not really possible with a store-and-forward, best-effort-retry service that runs over the Internet. With smartphones and Blackberries, many companies can maintain a semblance of business as usual even when a site goes offline for hours or days, but if email is down for even a moment, heads can roll.

While this particular attack requires an authenticated user, so too do many others. It is not terribly difficult to convince a user to run software, especially with the number of plugins that combine Outlook with social networking sites. And as companies move their email system to the cloud, outsourcing what is looked at as a utility service on one hand, and as the most important, mission critical system in the company on the other, what many of them do not realise is that the outsource provider may be hosting their email on a system that also hosts mail for dozens of other companies. All of those users are making authenticated calls to the system. Do they meet your patching and antivirus standards?

This vulnerability exists only in Exchange 2007 SP2. Companies that have moved to SP3, or to Exchange 2010, are not at risk, but it is worthwhile to note that in the MS10-106 bulletin, Microsoft states:

“The majority of customers have automatic updating enabled and will not need to take any action because this security update will be downloaded and installed automatically.”

Show of hands…how many of you automatically patch your Exchange servers? Anyone? Anyone? For those of you who outsource your mail, how many of you know what patch level your provider maintains on the systems hosting your mail? At one previous employer, I did an assessment of the three hosted mail systems and found them to be on three different patch versions… the most recent was over a year out of date.

If you are running Exchange 2007 SP2, I urge you to treat this vulnerability as severe, and patch it as soon as you can test it in your environment. Better still, apply SP3. If you have outsourced your email, contact your provider to confirm what version of Exchange your email is on, and review their patching policies. It’s called due diligence; it is a reasonable request, and how they respond may tell you more than the pre-sales guy ever did.

Posted in Exchange | Leave a Comment »