Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘Data’

Data Compression in SQL Server 2008

Posted by Alin D on December 15, 2010

Data compression is a new feature introduced in SQL Server 2008. It enables DBAs to manage MDF files and backup files more efficiently. There are two types of compression:

1. Row Level Compression: This type of compression works at the row level of the data page.

  • Fixed-length data types are effectively stored as variable length. For instance, Char(10) is a fixed-length data type; if we store “Venkat” in it, the name occupies 6 characters and the remaining 4 are wasted in earlier versions of SQL Server. With row compression in SQL Server 2008, only the 6 characters actually used are stored.
  • NULL values and zeros are removed: they are not stored on disk but are instead tracked by a reference in the compression information (CI) structure.
  • The amount of metadata used to store each row is reduced. (A minimal sketch of enabling row compression follows this list.)
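
As a concrete illustration, here is a minimal sketch of enabling row compression from PowerShell. The instance, database and table names (dbo.Customers) are hypothetical placeholders, and it assumes the SQL Server 2008 sqlps snap-in, which provides Invoke-Sqlcmd, is available:

```powershell
# Enable ROW compression on a hypothetical table (an Enterprise Edition feature).
# Assumes Invoke-Sqlcmd from the SQL Server 2008 sqlps snap-in is available.
$server   = "MYSERVER\SQL2008"   # placeholder instance name
$database = "SalesDB"            # placeholder database name

Invoke-Sqlcmd -ServerInstance $server -Database $database -Query @"
ALTER TABLE dbo.Customers
REBUILD WITH (DATA_COMPRESSION = ROW);
"@
```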

2. Page Level Compression: This compression is applied at the page level.

  • Page compression builds on row compression; the page-level techniques below are applied on top of it.
  • Prefix Compression – This works at the column level. Repeated prefix values are removed from the rows and a reference is stored in the compression information (CI) structure, which is located immediately after the page header.
  • Dictionary Compression – This is applied across the page as a whole. It removes repeated values anywhere on the page and places a reference to them on the page. (A sketch for estimating and enabling page compression follows this list.)
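
Before switching an object to page compression, the expected savings can be checked with SQL Server 2008's built-in sp_estimate_data_compression_savings procedure. A minimal sketch, again using the hypothetical table and instance names from the previous example:

```powershell
# Estimate how much space PAGE compression would save, then apply it.
# Instance, database and table names are placeholders.
$server   = "MYSERVER\SQL2008"
$database = "SalesDB"

Invoke-Sqlcmd -ServerInstance $server -Database $database -Query @"
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Customers',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
"@ | Format-Table -AutoSize

# If the estimate looks worthwhile, rebuild the table with PAGE compression.
Invoke-Sqlcmd -ServerInstance $server -Database $database -Query @"
ALTER TABLE dbo.Customers REBUILD WITH (DATA_COMPRESSION = PAGE);
"@
```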

How it works:

Suppose a user requests some data. The relational engine takes care of compiling and parsing the request and then asks the storage engine for the data.

Data Compression

Now, our data is stored in compressed format. The storage engine sends the compressed data to the buffer cache, which takes care of handing it to the relational engine in uncompressed form. The relational engine performs its modifications on the uncompressed data and sends it back to the buffer cache. The buffer cache compresses the data again, keeps it for future use, and in turn sends a copy to the storage engine.
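
Because compression is purely a storage-level property, you can check which objects are compressed in sys.partitions while queries keep returning ordinary uncompressed rows. A small sketch (instance and database names are placeholders):

```powershell
# List user tables and the compression setting of each of their partitions.
$server   = "MYSERVER\SQL2008"
$database = "SalesDB"

Invoke-Sqlcmd -ServerInstance $server -Database $database -Query @"
SELECT o.name AS table_name,
       p.partition_number,
       p.data_compression_desc
FROM   sys.partitions AS p
JOIN   sys.objects    AS o ON o.object_id = p.object_id
WHERE  o.type = 'U'
ORDER  BY o.name, p.partition_number;
"@ | Format-Table -AutoSize
```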

Advantages:

  1. More data fits in the buffer cache, so fewer requests have to go to disk, which in turn reduces I/O.
  2. Disk space usage is greatly reduced.

Disadvantages:

  1. More CPU cycles are used to decompress the data.
  2. Compression can have a negative impact if the data does not contain many NULL values or zeros and the stored values already fill their declared data types.

Posted in SQL | Leave a Comment »

Using Stripe Set While Initializing a Volume May Cause Data Loss in Windows

Posted by Alin D on July 3, 2010

In Microsoft Windows NT-based operating systems, you can easily create software RAID 5 (Redundant Array of Independent Disks) stripe sets with parity. A RAID 5 stripe set stores the striped data, together with parity information, across several drives in 64 KB stripes. During the initialization stage of creating a stripe set with parity, Windows calculates the parity information for every 64 KB stripe. If you use the volume during initialization, you can run into data loss problems and may need to opt for a Windows data recovery solution to sort out the problem.
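
To give an intuition for what that parity information is, here is a small, purely illustrative PowerShell sketch (this is not how FTDISK is implemented): the parity block of a stripe is the byte-wise XOR of its data blocks, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

```powershell
# Illustrative only: RAID 5 parity is the byte-wise XOR of the data blocks in a stripe.
$blockA = [byte[]](0x10, 0x22, 0x3F, 0x40)   # data block on disk 1 (tiny stand-in for 64 KB)
$blockB = [byte[]](0xA5, 0x01, 0x7E, 0x99)   # data block on disk 2

# Parity block, as written by the initialization pass for each stripe.
$parity = for ($i = 0; $i -lt $blockA.Length; $i++) { $blockA[$i] -bxor $blockB[$i] }

# If disk 2 fails, its block can be rebuilt from disk 1 and the parity.
$rebuiltB = for ($i = 0; $i -lt $blockA.Length; $i++) { $blockA[$i] -bxor $parity[$i] }

"Original B: " + (($blockB   | ForEach-Object { $_.ToString('X2') }) -join ' ')
"Rebuilt  B: " + (($rebuiltB | ForEach-Object { $_.ToString('X2') }) -join ' ')
```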

Microsoft Windows Disk Administrator does not prevent you from formatting the stripe set and using the volume immediately while the initialization is still in progress. Nor does Windows prevent you from using a volume that is being regenerated after a hard drive failure. This is because FTDISK ensures that all of the modifications made to the drive are mirrored in the parity block; the drive handles all the synchronization issues, including data writes catching up with the initialization of parity.

However, during the regeneration or initialization of the volume, the set is “exposed”: if a drive is damaged or fails while the initialization of a stripe set with parity is still in progress, you lose the whole set. Fault tolerance only applies once the stripe set has finished initializing.

If you choose to use the volume during the regeneration or initialization process, you are running the risk of data loss. If the initialization process is terminated or fails, the entire volume may be lost. You should therefore wait for the regeneration or initialization process to complete successfully before using the volume, to make sure you preserve the integrity and protection of all your valuable data.

In such critical situations, you need to restore your data from backup. If a backup is not available or is damaged, you need to opt for Windows recovery tools to get your mission-critical data back. You can retrieve your data with Windows data recovery software.

These applications perform an in-depth scan of the affected drive using efficient scanning algorithms to recover as much as possible. Windows partition recovery tools are easy to use and do not demand prior technical skills.

Stellar Phoenix Windows Data Recovery is an advanced and effective tool to recover lost, missing and inaccessible data. It recovers data from FAT32, VFAT, NTFS and NTFS5 file system partitions. The application works with Windows 7, Vista, 2003, XP and 2000.

The author is a student of Mass Communication doing research on how to recover Windows files with the help of data recovery software.

Posted in Windows 7 | Leave a Comment »

PowerCLI data piped to SCOM via PowerWF

Posted by Alin D on June 18, 2010

In this demo I transcribe a PowerShell/PowerCLI script into a workflow, redirect the script output to WMI, deploy the workflow to the PowerWF agent, use PowerWF to create a management pack, import that pack into SCOM, and then view the data being harvested on the agent by the PowerCLI script inside SCOM. PowerWF Studio is a suite of tools that leverages PowerShell and Windows Workflow for automation and administration of physical and virtual environments. Leveraging VMware’s PowerCLI and VIX APIs plus several other PowerWF activity packs, PowerWF Studio for VMware offers levels of automation typically seen only in enterprise-class solutions. The product supports management of VMware Server, Workstation, Player, and Virtual Infrastructure (both ESX and vCenter).
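
The script from the demo is not reproduced here, but the kind of PowerCLI harvesting snippet that gets transcribed into a workflow looks roughly like the following sketch. The vCenter name is a placeholder and it assumes the VMware PowerCLI snap-in is installed:

```powershell
# A typical PowerCLI data-harvesting script of the sort transcribed into a PowerWF workflow.
Add-PSSnapin VMware.VimAutomation.Core
Connect-VIServer -Server "vcenter.example.local"   # placeholder vCenter/ESX host

# Collect a simple snapshot for every VM: power state and a recent CPU usage sample.
$report = Get-VM | ForEach-Object {
    $cpu = Get-Stat -Entity $_ -Stat "cpu.usage.average" -Realtime -MaxSamples 1 |
           Select-Object -First 1
    New-Object PSObject -Property @{
        Name       = $_.Name
        PowerState = $_.PowerState
        CpuUsage   = $cpu.Value
    }
}

$report | Format-Table Name, PowerState, CpuUsage -AutoSize

Disconnect-VIServer -Confirm:$false
```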

Posted in Windows 7 | Leave a Comment »