Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


Posts Tagged ‘API’

Five ways to optimize application compatibility in Windows Server 8

Posted by Alin D on December 12, 2011

By and large, Windows tries to remain as backwards-compatible as possible from version to version, but every now and then a set of changes comes along with the power to really disrupt things, e.g., the removal of 16-bit app compatibility in 64-bit editions of Windows.

Both the client and server editions of Windows 8 have a few changes that can make current application compatibility a challenge. And not all of these challenges can be properly confronted by admins; some of them will need to be dealt with by the original application authors.

Most of the changes are explained in a newly updated document called the “Windows and Windows Server Developer Preview Compatibility Cookbook,” which examines all major app-compatibility obstacles for the current edition of Windows and provides solutions and workarounds.

Here’s a rundown of some of the most crucial application-compatibility issues to be aware of.

System version numbers. Yes, this old bugaboo is back in a slightly different form. Older applications that insist on a specific version of Windows may bomb, as Windows 8 reports itself as version 6.2. Such apps can be installed using Windows’ existing shims for overriding version-number reporting (e.g., the “Compatibility” tab in the app shortcut), but if you’re an app developer you should be that much more cautious about how you check version numbers. Microsoft recommends using the VerifyVersionInfo function in a sensible way (test for “greater than or equal to,” not “equal to,” a given version number).
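For developers, the safe pattern is a “this version or later” check. Here’s a minimal C sketch (the helper name is mine; the VerifyVersionInfo call and VER_SET_CONDITION macro are standard Win32):

#include <windows.h>

/* Hypothetical helper: returns TRUE on Windows 8 / Server 2012 (6.2) or later.
   Note the VER_GREATER_EQUAL tests; an "equal to 6.2" test would break
   again on the next version of Windows. */
BOOL IsAtLeastWindows8(void)
{
    OSVERSIONINFOEXW osvi = { sizeof(osvi) };
    DWORDLONG mask = 0;

    osvi.dwMajorVersion = 6;
    osvi.dwMinorVersion = 2;

    VER_SET_CONDITION(mask, VER_MAJORVERSION, VER_GREATER_EQUAL);
    VER_SET_CONDITION(mask, VER_MINORVERSION, VER_GREATER_EQUAL);

    return VerifyVersionInfoW(&osvi, VER_MAJORVERSION | VER_MINORVERSION, mask);
}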

Headless server apps. This is one of the bigger changes, since newer versions of Windows Server — Server Core, mainly — are designed to run not just headless, but without a GUI at all. The GUI can be uninstalled in low-resource environments (for instance, a heavily shared virtual machine) or to reduce the overall attack surface of the server.

Some server applications, however, may not run in a GUI-less setting. Any command-line version of an application should run fine, but anything that presents a GUI to the end user may not work at all. There is, at this time, no way to “wrap” a GUI application so that it behaves normally without a GUI.

If you’re planning on running any application on a Server Core installation, you should test it to make sure it behaves as expected without the GUI. If it doesn’t, and you have some power over how the app is written, you’ll need to read up on migrating existing code to Server Core and learn which portions of the Win32 API and the .NET CLR are supported by Core.

.NET framework. Windows 8 features .NET 4.5 as part of its default installation bundle, but not .NET 3.5. If you have anything that has been built explicitly to use 3.5 — not just standalone apps, but websites written for that edition of .NET — you’ll need to add that earlier edition of .NET by hand. Fortunately this doesn’t pose any major compatibility problems, since the different editions of .NET run side by side. The Microsoft document has some notes on how to add .NET Framework 3.5 without triggering an automatic request for it from Windows Update.
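The usual offline route is the Deployment Image Servicing and Management (DISM) tool pointed at local installation media; a typical invocation (assuming the Windows 8 media is mounted as drive D:) looks like this:

dism /online /enable-feature /featurename:NetFx3 /all /limitaccess /source:D:\sources\sxs

The /LimitAccess switch is what stops DISM from contacting Windows Update for the payload.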

Word has it that .NET 4.5 also ties into the Windows Runtime (WinRT), the set of APIs that lets developers create applications with the new “Metro” look (less crucial for servers) and that uses a sandboxed programming model to quickly create programs for Microsoft’s Windows Store, among other things. If you intend to write or upgrade server apps for Windows 8, and you already know C# or C++, it shouldn’t be hard to get up to speed with WinRT, but that’s something worth exploring in its own article.

4K disk sectors. This may sound more like a hardware issue than an application issue, but it still deserves some mention here. Newer disks aimed at the server market now use 4K sectors instead of the old 512-byte sectoring scheme. 4K-sector drives, called “Advanced Format” drives, can do some strange things to applications that expect 512-byte sectors, despite the fact that many 4K drives come with a backwards-compatibility extension that emulates 512-byte sectors (“512e”). Windows 8 adds a new API for querying file sector sizes to get around this, and also updates the fsutil command-line tool so that scripts can query a volume’s sector sizes.
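If you want to check what a given volume reports before deploying an app on it, fsutil is the quickest route; on Windows 8 the ntfsinfo query includes the physical sector size alongside the familiar logical one:

fsutil fsinfo ntfsinfo c:

Look for the bytes-per-sector and bytes-per-physical-sector values in the output; on a 512e drive they should read 512 and 4096 respectively.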

Unsigned kernel-mode drivers. If you have any applications, whether third-party or crafted in-house, that use kernel-mode drivers, be aware that Windows Server has been hardened against the use of kernel-mode drivers as a vector for malware. The biggest changes involve kernel-mode drivers on machines that use the Unified Extensible Firmware Interface (UEFI) Secure Boot function, which protects the machine against malware that injects itself into the pre-boot environment. UEFI Secure Boot is optional for servers but recommended. If you want to take advantage of it, you’re best off having your kernel-mode drivers signed by a trusted certification authority. Otherwise, you’ll have to disable Secure Boot.



How To Encrypt Connection Strings in ASP.NET

Posted by Alin D on November 2, 2010

Although a connection string stored in the web.config file is relatively safe, since IIS will never serve that file to users, it is still best practice to encrypt all connection strings used in an ASP.NET application.

You can use the Aspnet_regiis.exe tool to do this, with the -pe (provider encryption) option to encrypt sections of the Web.config file. To encrypt the connectionStrings section, run the following command from the command prompt:

aspnet_regiis -pe "connectionStrings" -app "/MachineDPAPI" -prov "DataProtectionConfigurationProvider"

In the above command, -pe specifies which configuration section to encrypt, -app specifies the virtual path to the application, and -prov specifies the provider name. The .NET Framework ships with two protected configuration providers:

  • RSAProtectedConfigurationProvider. The default provider, which uses RSA public-key encryption to encrypt and decrypt data. Use this provider to encrypt config files for use on several Web servers in a Web farm.
  • DPAPIProtectedConfigurationProvider. This provider uses the Windows Data Protection API (DPAPI) to encrypt and decrypt data. Use this provider to encrypt config files used on a single server.

It is not necessary to take any steps to decrypt the data since the ASP.NET runtime handles this seamlessly.
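Should you ever need the section back in plain text, for example to edit the connection string, the same tool reverses the operation with the -pd (provider decryption) option:

aspnet_regiis -pd "connectionStrings" -app "/MachineDPAPI"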

Note that you should also consider encrypting the <appSettings>, <identity> and <sessionState> sections of the web.config file, since these may also contain sensitive data.


Clean Drive – Perl Script

Posted by Alin D on October 15, 2010

Used to clean a logical drive. This script deletes temporary files, kernel and user memory dumps as well as internet temporary files. Optionally, it can also compress log directories and/or delete a list of specified files and/or directories.
Usage: $script /[d]rive <drive letter> /[l]og <filename>
       /[c]ompress <path or filename> /[f]ilelist <filename>
       /[t]est /[v]erbose /[h]elp

Variables

/drive      Logical drive letter to clean.

/log        Name of logfile (default is cleandrive-YYYYmonthDD.log).

/compress   Use NTFS compression on the specified path (e.g. c:\temp).
            A file containing a list of paths can also be specified,
            one entry per line.

/filelist   Name of a file containing a list of files or directories
            to delete (an example of an entry is c:\temp; the file
            contains one entry per line; the wildcard * can be used so
            that a line applies to all drives).

/test       Do not delete any files, but only log what would be done.

/verbose    Shows what is being done as it is being done.

/help       Shows this help message.
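For example, a dry run that only logs what would be removed from the C: drive (an illustrative invocation, assuming the script is saved as cleandrive.pl):

perl cleandrive.pl /drive c /test /verbose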

use Getopt::Long;
#use diagnostics;
#use strict;
use Win32::Console;
use File::Find;
use Cwd;
use Win32::API::Prototype;

##################
# main procedure #
##################
my (%config);
my ($files, $dirs);
my ($totalsize) = 0;

# pass a reference so GetOptions can populate %config directly
p_parsecmdline(\%config);
p_checkargs();

# set console codepage
Win32::Console::OutputCP(1252);

# check drive is valid
my $drive = $config{drive};
$drive =~ s/\W+//g;
$drive .= ":\\";
if (! -d $drive) {
    die "ERROR: $drive is not a valid drive";
}

# compute log file name if it hasn't been specified
unless (defined ($config{log})) {
    my $time = localtime(time());
    my @time = split(/\s+/, $time);
    $config{log} = "cleandrive-$time[4]$time[1]$time[2].log";
}
unless ($config{log} =~ /\\/) {
    my $cwd = getcwd;
    unless ($cwd =~ /\/$/) {
        $cwd .= "/";
    }
    $config{log} = "$cwd$config{log}";
}

# delete temp dirs content and dump files
p_cleantemp($config{drive});
p_cleandumps($config{drive});

# apply ntfs compression
if (defined ($config{compress})) {
    p_compressfolders($config{drive}, $config{compress});
}

# process files to delete
if (defined ($config{filelist})) {
    p_delfiles($config{drive}, $config{filelist});
}

# print summary of what was done
if (defined ($files) or defined ($dirs)) {
    $files ||= 0;
    $dirs  ||= 0;
    if (defined ($config{test})) {
        print "\n$files files and $dirs directories would have been deleted.\n";
    } else {
        print "\n$files files and $dirs directories were deleted.\n";
    }

    my $TotalSizeUnit = "bytes";
    my $count = 0;
    while ($totalsize > 1024) {
        $totalsize = $totalsize / 1024;
        ++$count;
    }
    if ($count == 1) {
        $TotalSizeUnit = "KB";
    } elsif ($count == 2) {
        $TotalSizeUnit = "MB";
    } elsif ($count == 3) {
        $TotalSizeUnit = "GB";
    } elsif ($count == 4) {
        $TotalSizeUnit = "TB";
    } elsif ($count == 5) {
        $TotalSizeUnit = "PB";
    }

    print "Total space saved: ";
    printf ("%.2f", $totalsize);
    print " $TotalSizeUnit.\n";
    my $logpath = $config{log};
    $logpath =~ s/\//\\/g;
    print "Log file is $logpath\n";
} else {
    print "\nThere were no files or directories to process.\n";
}

my $LocalHost = Win32::NodeName();
p_getfreespace($LocalHost, $config{drive});

exit (0);

##################
# sub-procedures #
##################

# procedure p_help
# displays a help message
sub p_help {
    my ($script) = ($0 =~ /([^\\\/]*)$/);
    my ($header) = $script . ' v3.1 - Author: suparatul@gmail.com - http://windows-scripting.co.cc';
    my ($line) = "-" x length($header);
    print <<EOT;

$header
$line
Used to clean a logical drive. This script deletes temporary files,
kernel and user memory dumps as well as internet temporary files.
Optionally, it can also compress log directories and/or delete a
list of specified files and/or directories.

Usage: $script /[d]rive <drive letter> /[l]og <filename>
       /[c]ompress <path or filename> /[f]ilelist <filename>
       /[t]est /[v]erbose /[h]elp

/drive     Logical drive letter to clean.
/log       Name of logfile (default is cleandrive-YYYYmonthDD.log).
/compress  Use NTFS compression on the specified path (e.g. c:\\temp).
           A file containing a list of paths can also be specified,
           one entry per line.
/filelist  Name of a file containing a list of files or directories
           to delete (an example of an entry is c:\\temp; one entry
           per line; the wildcard * can be used so that a line
           applies to all drives).
/test      Do not delete any files, but only log what would be done.
/verbose   Shows what is being done as it is being done.
/help      Shows this help message.
EOT

    exit 1;
}

# procedure p_parsecmdline
# parses the command line and retrieves arguments values
sub p_parsecmdline {
    my ($config) = @_;    # hash reference
    Getopt::Long::Configure("prefix_pattern=(-|/)");
    GetOptions($config, qw(
        drive|d=s
        log|l=s
        compress|c=s
        filelist|f=s
        test|t
        verbose|v
        help|?|h));
}

# procedure p_checkargs
# checks the arguments which have been used are a valid combination
sub p_checkargs {
    if ($config{help}) {
        p_help();
    }
    unless (defined ($config{drive})) {
        p_help();
    }
}
# procedure p_cleantemp
# deletes content of temporary directories
sub p_cleantemp {
    my $drive = shift;
    # strip drive letter of all non alpha-numeric characters
    $drive =~ s/\W+//g;
    my @temp;

    # populate array @temp of temporary directories on $drive
    if ((-d "$drive:/temp") and (lc("$drive:\\temp") ne lc($ENV{TEMP})) and (lc("$drive:\\temp") ne lc($ENV{TMP}))) {
        push (@temp, "$drive:/temp");
    }
    if ((-d "$drive:/tmp") and (lc("$drive:\\tmp") ne lc($ENV{TEMP})) and (lc("$drive:\\tmp") ne lc($ENV{TMP}))) {
        push (@temp, "$drive:/tmp");
    }
    # add environment variables that define temporary directories,
    # only if they are located on $drive
    if ($ENV{TEMP} =~ /^$drive/i) {
        my $temp = $ENV{TEMP};
        $temp =~ s/\\/\//g;    # normalise to forward slashes
        push (@temp, $temp);
    }
    if (($ENV{TMP} =~ /^$drive/i) and (lc($ENV{TMP}) ne lc($ENV{TEMP}))) {
        my $tmp = $ENV{TMP};
        $tmp =~ s/\\/\//g;
        push (@temp, $tmp);
    }

    foreach my $dir (@temp) {
        my $FormattedDir = $dir;
        $FormattedDir =~ s/\//\\/g;    # display with backslashes
        print "\n Processing '$FormattedDir' directory" if defined ($config{verbose});
        finddepth (\&p_del, $dir);
    }

    # determine if $drive is the system drive, in which case process user profiles temp directories
    if (lc($ENV{SYSTEMDRIVE}) eq lc("$drive:")) {
        my @profiles = ("$ENV{SYSTEMROOT}\\Profiles", "$drive:\\Documents and Settings");
        foreach my $dir (@profiles) {
            if (-d $dir) {
                opendir (PROFILES, $dir) or next;
                while (my $folder = readdir(PROFILES)) {
                    if (($folder eq ".") or ($folder eq "..")) {
                        next;
                    }
                    my $path = "$dir\\$folder";
                    if (-d "$path\\Local Settings\\temp") {
                        print "\n Processing '$path\\Local Settings\\temp' directory" if defined ($config{verbose});
                        finddepth (\&p_del, "$path\\Local Settings\\temp");
                    }
                    if (-d "$path\\Local Settings\\Temporary Internet Files") {
                        print "\n Processing '$path\\Local Settings\\Temporary Internet Files' directory" if defined ($config{verbose});
                        finddepth (\&p_del, "$path\\Local Settings\\Temporary Internet Files");
                    }
                }
                closedir (PROFILES);
            }
        }
        # process %SYSTEMROOT%\temp if it exists
        if (-d "$ENV{SYSTEMROOT}\\temp") {
            my $path = "$ENV{SYSTEMROOT}\\temp";
            print "\n Processing '$path' directory" if defined ($config{verbose});
            finddepth (\&p_del, $path);
        }
    }
}
# procedure p_cleandumps
# deletes kernel and user memory dumps
sub p_cleandumps {
    my $drive = shift;
    $drive =~ s/\W+//g;
    # search the system drive for memory.dmp and minidump files and delete them
    if (lc($ENV{SYSTEMDRIVE}) eq lc("$drive:")) {
        # test for memory.dmp and delete it only if it is 5 days or older
        if (-f "$ENV{SYSTEMROOT}\\memory.dmp") {
            my @filestat = stat("$ENV{SYSTEMROOT}\\memory.dmp");
            my $seconds = time() - $filestat[9];
            if ($seconds > 432000) {    # 5 days
                print "\n Processing $ENV{SYSTEMROOT}\\memory.dmp" if defined ($config{verbose});
                if (defined ($config{test})) {
                    p_log($config{log}, "File $ENV{SYSTEMROOT}\\memory.dmp would have been deleted.\n");
                    ++$files;
                    $totalsize += $filestat[7];
                } elsif (unlink ("$ENV{SYSTEMROOT}\\memory.dmp")) {
                    p_log($config{log}, "File $ENV{SYSTEMROOT}\\memory.dmp was deleted.\n");
                    ++$files;
                    $totalsize += $filestat[7];
                }
            }
        }
        if (-d "$ENV{SYSTEMROOT}\\Minidump") {
            print "\n Processing the '$ENV{SYSTEMROOT}\\Minidump' directory" if defined ($config{verbose});
            opendir (MINIDUMP, "$ENV{SYSTEMROOT}\\Minidump");
            while (my $file = readdir(MINIDUMP)) {
                # delete mini dumps that are older than 5 days
                next if (($file eq ".") or ($file eq ".."));
                my $path = "$ENV{SYSTEMROOT}\\Minidump";
                if (-f "$path\\$file") {
                    my @filestat = stat("$path\\$file");    # stat the file itself
                    my $seconds = time() - $filestat[9];
                    if ($seconds > 432000) {
                        if (defined ($config{test})) {
                            p_log($config{log}, "File $path\\$file would have been deleted.\n");
                            ++$files;
                            $totalsize += $filestat[7];
                        } elsif (unlink ("$path\\$file")) {
                            p_log($config{log}, "File $path\\$file was deleted.\n");
                            ++$files;
                            $totalsize += $filestat[7];
                        }
                    }
                }
            }
            closedir (MINIDUMP);
        }
    }
}
# procedure p_compressfolders
# applies NTFS compression to path(s)
sub p_compressfolders {
    my ($drive, $folders) = @_;
    my (@folderlist);
    $drive =~ s/\W+//g;
    # determine if a filename has been specified, if so, then call sub p_readfile
    if (-f $folders) {
        @folderlist = p_readfile($folders);
    } else {
        @folderlist = ($folders);
    }
    print "\n Applying NTFS compression" if defined ($config{verbose});
    # for each entry in the array, apply NTFS compression after making sure the path is valid
    # and that the path is on the drive being cleaned
    foreach my $file (@folderlist) {
        $file =~ s/^\*:/$drive:/i;    # expand the * wildcard to the current drive
        if ((-d $file) and ($file =~ /^$drive:/i)) {
            print "." if defined ($config{verbose});
            if (defined ($config{test})) {
                p_log($config{log}, "Would have attempted to NTFS compress $file\n");
            } else {
                `compact /C /S /I "$file"`;
                p_log($config{log}, "Attempted to NTFS compress $file\n");
            }
        }
    }
}

# procedure p_readfile
# reads the content of a file into an array
sub p_readfile {
    my $file = shift;
    my (@list);
    # open a handle to the file
    open (FILE, $file) or return ();
    while (defined (my $entry = <FILE>)) {
        chomp ($entry);
        push (@list, $entry);
    }
    close (FILE);
    return (@list);
}
# procedure p_delfiles
# delete specified files
sub p_delfiles {
    my ($drive, $list) = @_;    # renamed from $files so the global counter is not clobbered
    my (@filelist);
    $drive =~ s/\W+//g;
    # call sub p_readfile
    if (-f $list) {
        @filelist = p_readfile($list);
        print "\n Processing entries in '$list'" if defined ($config{verbose});
        # for each element in the array, delete the file or directory
        foreach my $file (@filelist) {
            $file =~ s/^\*:/$drive:/i;
            if ((-d $file) and ($file =~ /^$drive:/i)) {
                finddepth (\&p_del, $file);
            } elsif ((-f $file) and ($file =~ /^$drive:/i)) {
                print "." if defined ($config{verbose});
                my @filestat = stat($file);
                if (defined ($config{test})) {
                    p_log($config{log}, "File $file would have been deleted.\n");
                    ++$files;
                    $totalsize += $filestat[7];
                } elsif (unlink($file)) {
                    p_log($config{log}, "File $file was deleted.\n");
                    ++$files;
                    $totalsize += $filestat[7];
                }
            }
        }
    }
}
# procedure p_del
# used by File::Find call to delete files or directories
sub p_del {
    if (-d $File::Find::name) {
        if (defined ($config{test})) {
            p_log($config{log}, "Directory $File::Find::name would have been removed.\n");
            ++$dirs;
        } elsif (rmdir($File::Find::name)) {
            p_log($config{log}, "Directory $File::Find::name was removed.\n");
            ++$dirs;
        }
        print "." if defined ($config{verbose});
    } elsif (-f $File::Find::name) {
        my @filestat = stat($File::Find::name);
        if (defined ($config{test})) {
            p_log($config{log}, "File $File::Find::name would have been deleted.\n");
            ++$files;
            $totalsize += $filestat[7];
        } elsif (unlink($File::Find::name)) {
            p_log($config{log}, "File $File::Find::name was deleted.\n");
            ++$files;
            $totalsize += $filestat[7];
        }
        print "." if defined ($config{verbose});
    }
}

# procedure p_log
# manages creating log entries
sub p_log {
    my ($logfile, $message) = @_;
    my $time = localtime(time());
    open (LOG, ">>$logfile") or die "\nERROR: could not open $logfile: $^E\n";
    $message =~ s/\//\\/g;    # display paths with backslashes
    print LOG "$time: $message";
    close (LOG);
}
# procedure p_getfreespace
# prints free and total space for a drive via its administrative share
sub p_getfreespace {
    my ($servername, $drive) = @_;
    my $Win32Error = 0;
    my $pFree      = pack("L2", 0, 0);
    my $pTotal     = pack("L2", 0, 0);
    my $pTotalFree = pack("L2", 0, 0);
    my $path = "\\\\" . $servername . "\\" . $drive . "\$\\";    # \\server\X$\

    # import Win32 API function
    ApiLink('kernel32.dll', 'BOOL GetDiskFreeSpaceEx(
        LPCTSTR lpDirectoryName,
        PVOID lpFreeBytesAvailable,
        PVOID lpTotalNumberOfBytes,
        PVOID lpTotalNumberOfFreeBytes)')
        or die "\nERROR: cannot link to GetDiskFreeSpaceEx\n";

    # make the function call
    if (GetDiskFreeSpaceEx($path, $pFree, $pTotal, $pTotalFree)) {
        # compute the number of free bytes
        my $freespace  = p_MakeLargeInt(unpack("L2", $pTotalFree));
        my $TotalSpace = p_MakeLargeInt(unpack("L2", $pTotal));
        my $SpaceUsed  = $TotalSpace - $freespace;
        my $PercentageUsed = ($SpaceUsed * 100) / $TotalSpace;

        my $FreeSpaceUnit = "bytes";
        my $i = 0;
        while ($freespace > 1024) {
            $freespace = $freespace / 1024;
            ++$i;
        }
        if ($i == 1) {
            $FreeSpaceUnit = "KB";
        } elsif ($i == 2) {
            $FreeSpaceUnit = "MB";
        } elsif ($i == 3) {
            $FreeSpaceUnit = "GB";
        } elsif ($i == 4) {
            $FreeSpaceUnit = "TB";
        } elsif ($i == 5) {
            $FreeSpaceUnit = "PB";
        }

        my $TotalSpaceUnit = "bytes";
        $i = 0;
        while ($TotalSpace > 1024) {
            $TotalSpace = $TotalSpace / 1024;
            ++$i;
        }
        if ($i == 1) {
            $TotalSpaceUnit = "KB";
        } elsif ($i == 2) {
            $TotalSpaceUnit = "MB";
        } elsif ($i == 3) {
            $TotalSpaceUnit = "GB";
        } elsif ($i == 4) {
            $TotalSpaceUnit = "TB";
        } elsif ($i == 5) {
            $TotalSpaceUnit = "PB";
        }

        print "There now is ";
        printf "%.2f", $freespace;
        print " $FreeSpaceUnit available out of ";
        printf "%.2f", $TotalSpace;
        print " $TotalSpaceUnit (";
        printf "%.2f", $PercentageUsed;
        print "% used) on the $drive: drive.\n";
    } else {
        $Win32Error = Win32::GetLastError();
        my $ErrorMessage = Win32::FormatMessage($Win32Error);
        print "\\\\$servername\\$drive\$ ERROR $Win32Error: $ErrorMessage";
    }

    exit $Win32Error;
}

# procedure p_MakeLargeInt
# combines the low and high 32-bit halves into one number
sub p_MakeLargeInt {
    my ($Low, $High) = @_;
    return ($High * (1 + 0xFFFFFFFF) + $Low);
}

# procedure p_FormatNumber
# add commas to a number to make it more readable
sub p_FormatNumber {
    my ($Num) = @_;
    1 while ($Num =~ s/^(-?\d+)(\d{3})/$1,$2/);
    return ($Num);
}


Host PHP in the Cloud with Windows Azure

Posted by Alin D on August 24, 2010

More than a buzzword in executive meetings, cloud computing is the next big thing in the world of IT. Clouds offer an infinite amount of resources, both on demand and in pay-per-use models: computer resources on tap! In this article, I’ll focus on one of these cloud platforms, Microsoft’s Windows Azure, and give you all the information you need to get started developing PHP applications on this platform. Although we won’t go too deep into the technicalities, I will point you to further information and resources on specific points as we go.

Different Clouds

Choice is a good thing. The great news for us developers is that there are many choices when it comes to cloud computing. Microsoft, Google, Amazon, Rackspace, GoGrid, and many others offer cloud products that have their own special characteristics. It looks like the whole world is dividing these offers into two distinct categories: IaaS (Infrastructure-as-a-Service) and PaaS (Platform-as-a-Service)—the difference between the two is illustrated in Figure 1, “The difference between cloud platforms”.

Figure 1. The difference between cloud platforms
The difference between cloud platforms

First, let’s look at IaaS. Amazon EC2 was the first to offer virtual machines that could run your application. These virtual machines, however, are under your control, like physical servers in your data center. This means that you’re in control of patches, security, maintenance to the operating system—and all with full root or administrator access. The cloud platform takes the infrastructure woes out of your hands, as networking, load balancers, and firewalls are handled for you.

Next, there’s PaaS. This approach is also based on virtual machines, but you don’t have control over them. Instead, a set of tools and APIs is provided to let you package your application and upload it onto your virtual machine, so the only thing you have to worry about is your application. The networking, operating system, and so on are all maintained by the cloud platform.

All cloud vendors share common features, including virtual machines, and storage that’s available through REST-based protocols. Then again, each offering has its own unique features, which is good: clouds are still in a very innovative phase, and as developers we have the luxury of choosing the platform that’s best suited to our particular applications.

Windows Azure Platform Overview

Throughout this article, I’ll be describing the Windows Azure Platform, Microsoft’s PaaS offering to the world of cloud computing. But before we dive into technical details, let’s get a feel for the components included in this offering, and what they do.

Windows Azure

Windows Azure is the core component of the Windows Azure Platform. The marketing folks describe this component as the “operating system for the Azure cloud.” I’m not a big fan of marketing folks and their quotes, but for once, they’re right! Windows Azure is the heart of Microsoft’s offering, and it does what you’d expect of any operating system: it allows you to run your application on a virtual machine, either in a web role (with a web server installed) or in a worker role—a cleaner virtual machine that allows you to host other types of applications.

Windows Azure also allows you to scale up rapidly: simply change a configuration value and you’ll have multiple instances running at the snap of your fingers. Load balancing is taken care of automatically and requires no configuration.

Next to the operating system, a set of storage services is included, which is accessible through a REST-based API. Blob storage allows you to host any file: text files, images, downloads, and more. Table storage is, in essence, a document database that has limited querying possibilities but can scale massively. And then there are queues, which are mostly used for communications between web and worker roles.

Windows Azure is the location where your application will be hosted. A web role will host your web application; you’ll probably use blob storage to store files, and possibly table storage (or SQL Azure, which we’ll discuss in a moment) to store your data. Windows Azure is also used by other components of the platform.

SQL Azure

In addition to hosting, you will probably need a place where you can store your relational data. This is where SQL Azure comes in: it’s a slightly modified version of Microsoft SQL Server that delivers all the services you’d expect from a database: tables, views, indexes, stored procedures, triggers, and so on.

SQL Azure provides database services in a scalable and reliable way. Data is replicated across different sites and made available through a load balancer, giving you a lot of performance on the data layer of your application.

Windows Azure Platform AppFabric

Windows Azure Platform AppFabric is, in essence, a combination of two products. There’s an Access Control Service to which you can delegate the tasks of authentication and authorization of users, and there’s the Service Bus, which, in my opinion, is one of the features that really makes Windows Azure stand out. In short, the service bus allows you to establish communication between two endpoints. That might be a service that publishes messages to a set of subscribers, but the service bus can also be used for punching holes in firewalls!

Imagine having applications A and B, each in different networks, behind different firewalls. No direct communication seems possible, yet the AppFabric service bus will make sure both applications can communicate. There’s no need to open up ports in your company’s firewall to have your cloud application communicate with an on-premises application.

Live Services

Live Services provides an online identity system that you probably already know: Windows Live ID. Live Services also offers features like presence awareness, search, mapping via Bing Maps, synchronization, and more.

Codename Projects: Dallas and Sydney

These products are still in their incubation phases, and will probably undergo some changes in the future. Nevertheless, they already offer some great features. Dallas is basically a Data-as-a-Service solution through which you can subscribe to various sets of data offered in an open format, OData, which is based on REST and Atom. It also provides your business with a new source of revenue: if you’re sitting on a lot of useful data, why not make it available via Dallas and have others pay for using it?

Project Sydney is different: it’s focused on how you communicate with your cloud application. Currently, that communication is completed through the public Internet, but Sydney will allow you to set up a VPN connection to your virtual machines, enabling you to secure communications using your own security certificates and such.

Tools and APIs Available for PHP

When we’re talking about using PHP on a cloud platform like Windows Azure, there are some requirements we should meet before we start to work with the cloud. You’ll need the right tools to build and deploy your application, but also the right APIs—those that allow you to use the platform and all of its features.

Microsoft has been doing a lot of good work in this area. Yes, Windows Azure is a Windows-based platform that seems to target only .NET languages. However, when you look at the tools, tutorials, APIs, and blog posts around PHP and Windows Azure, it is clear that PHP is an equally valued citizen of the platform!

Let’s take a tour of all the tools and APIs that are available for PHP on Windows Azure today. A lot of these tools are very easy to install using the Web Platform Installer—a “check-next-finish” wizard that allows you to install platforms and tools in an easy and efficient manner.

IDE Support

Of course, you can use your favorite editor to work on a PHP application that’ll be hosted on Windows Azure. On the other hand, if you’re using an Eclipse-based editor like Eclipse PDT, Zend Studio, or Aptana, you can take advantage of a great plugin that will speed up your development efforts, shown in Figure 2, “Using Eclipse for development”. The Eclipse plugin for Windows Azure is available at http://windowsazure4e.org. Also, Josh Holmes has prepared a handy post, Easy Setup for PHP on Azure Development.

Figure 2. Using Eclipse for development
Using Eclipse for development

After installing the plugin, you’ll find the following features have been added to your IDE:

  • Project Creation and Migration allows for the easy migration of an existing application to a Windows Azure application. This tool will get your application ready for packaging and deployment to Windows Azure.
  • Storage Explorer provides access to your Windows Azure storage accounts and allows you to upload and download blobs, query tables, list queues, and so on.
  • Debugging and local testing is also included: there’s no need to deploy and test your application immediately on Windows Azure. A “local cloud” simulation environment is available.

Packaging

Once your application is ready for deployment, it should be packaged for Windows Azure. Packaging is basically the process of creating a ZIP archive of your application and embedding a manifest of all the included files and their configuration requirements.

The Eclipse plugin for Windows Azure contains this feature. However, if you don’t use Eclipse as your IDE, or if you’re working in a non-Windows environment, you can package your application using the Windows Azure command-line tools for PHP developers.

Development Tools and SDKs

Next, let’s take a spin around some of the tools and SDKs that Windows Azure makes available to developers.

Windows Azure SDK for PHP

If you’re planning on migrating an application or building a new one for Windows Azure, chances are that you’ll need storage. This is where the Windows Azure SDK for PHP comes in handy: it gives you easy access to the blob storage, table storage and queue services provided by Windows Azure. You can download this SDK as a stand-alone, open-source package that allows you to access storage from both on-premises locations and your cloud application. If you’re using the Eclipse plug-in we discussed earlier, you’ll find this API is included.

The process of utilizing storage always starts with setting up your credentials: an account name and a shared key (think of this as a very long password). Then, you can use one of the specific classes available for blob storage, table storage, or queue storage.

Here’s an example of blob storage in action. First, I create a container (think of this as a virtual hard drive). Then, I upload a file from my local hard drive to blob storage:

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();
$storageClient->createContainer('testcontainer');

// upload /home/maarten/example.txt to Windows Azure
$result = $storageClient->putBlob('testcontainer', 'example.txt', '/home/maarten/example.txt');

Reading the blob afterwards is fairly straightforward:

/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Blob();

// download file to /home/maarten/example.txt
$storageClient->getBlob('testcontainer', 'example.txt', '/home/maarten/example.txt');

Table storage is a bit more complex. It’s like a very scalable database that’s not bound to a schema, and has limited querying possibilities. To use table storage, you’ll require some classes that can be used both by your PHP application and Windows Azure table storage. Here’s an example class representing a person:

class Person extends Microsoft_WindowsAzure_Storage_TableEntity
{
  /**
   * @azure Name
   */
  public $Name;

  /**
   * @azure Age Edm.Int64
   */
  public $Age;
}

Inserting an instance of Person into the table is as easy as creating a new instance and assigning it some properties. After that, the table storage API in the Windows Azure SDK for PHP allows you to insert the entity into a table named testtable:

/** Microsoft_WindowsAzure_Storage_Table */
require_once 'Microsoft/WindowsAzure/Storage/Table.php';

$entity = new Person('partition1', 'row1');
$entity->Name = "Maarten";
$entity->Age = 25;

$storageClient = new Microsoft_WindowsAzure_Storage_Table('table.core.windows.net', 'myaccount', 'myauthkey');
$storageClient->insertEntity('testtable', $entity);

That was a lot of information in one code snippet! First of all, what are partition1 and row1? Well, those are the partition key and row key. The partition key is a logical grouping of entities. In an application where users can contribute blog posts, for example, a good candidate for the partition key would be the username—this would allow you to easily query for all data related to a given user. The row key is the unique identifier for the row.

Queues follow the same idea—there’s an API that allows you to put, get, and delete messages from the queue on Windows Azure. Queues are also guaranteed to be processed: when a message is read from the queue, data is made invisible for a specific time. If, after that time, the message has not been explicitly removed, for example because a batch script has crashed, the message will re-appear and be available for processing again.
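The official samples above cover blobs and tables; for queues, a minimal sketch looks like the following (the class and method names mirror the blob example and come from the SDK’s queue storage class; treat the queue name and message content as illustrative):

/** Microsoft_WindowsAzure_Storage_Queue */
require_once 'Microsoft/WindowsAzure/Storage/Queue.php';

$storageClient = new Microsoft_WindowsAzure_Storage_Queue();
$storageClient->createQueue('workitems');

// producer: enqueue a work item
$storageClient->putMessage('workitems', 'resize example.jpg');

// consumer: a retrieved message stays invisible to other readers
// until it is explicitly deleted
$messages = $storageClient->getMessages('workitems', 1);
foreach ($messages as $message) {
    // ... process the work item here ...
    $storageClient->deleteMessage('workitems', $message);
}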

The Windows Azure SDK for PHP also has some extra features that are specific to both PHP and Windows Azure. This includes features like a session storage provider that allows you to share web session data over multiple web role instances. Another feature is a stream wrapper that allows you to use standard file functions like fopen on blob storage.

An example application, ImageCloud, which uses all the features described above, is available for download on my blog.

SQL Server Driver for PHP

The SQL Server Driver for PHP allows PHP developers to access SQL Server databases that are hosted on SQL Server or SQL Azure. The SQL Server Driver for PHP relies on the Microsoft SQL Server ODBC Driver to handle low-level communication with SQL Server. As a result, the SQL Server Driver for PHP is only supported on Windows and Windows Azure. It can be downloaded and installed as a PHP extension.

When you download this driver, be sure to download version 2.0. This version has the additional benefit that it provides PDO (PHP Data Objects) support, which allows you to quickly switch between, for example, MySQL and SQL Server.

Now, let’s imagine you have an SQL Azure database. The following code shows how you can connect to the blog database on your SQL Azure database server and retrieve the posts ordered by publication date:

// Connect to SQL Azure using PDO (note the sqlsrv DSN prefix)
$connection = new PDO('sqlsrv:Server=tcp:bvoj6aovnk.database.windows.net;Database=blog', 'sqladm@bvoj6aovnk', 'mypassword');

// Fetch the posts (fetchObject hydrates a user-defined Post class)
$posts = array();
$query = 'SELECT * FROM posts ORDER BY PubDate DESC';
$statement = $connection->query($query);
while ( $row = $statement->fetchObject('Post') ) {
  $posts[] = $row;
}

AppFabric SDK for PHP

As I mentioned before, the Windows Azure Platform AppFabric (not to be confused with the Windows Server AppFabric) enables you to delegate user authentication and authorization, and to punch firewalls and connect applications across different protected networks with ease. You can download it from http://dotnetservicesphp.codeplex.com.

In terms of authentication and authorization, it’s important to know a little about claims-based authentication and federation—a topic on which some interesting resources are available. Basically, your application establishes a trust relationship with an authentication authority (like Windows Azure Platform AppFabric), which means that your application trusts users that are authenticated with that authority. Next, your application will ask its users to claim their rights. For example, my application could ask the user to claim that they can create orders:

$requiredClaims = array('CreateOrder' => true);
if (ValidateClaimUtil::ValidateClaims($requiredClaims, "phpservice", 'http://localhost/SalesDashboard/', $signingKey))
{
  // User is allowed to create an order!
}
else
{
  // User is not authorized.
}

The Windows Azure Platform AppFabric Access Control Service will validate that the user has this claim, and sign a security token with that information. Since your application trusts this authority, it will either continue or fail on the basis of whether or not the claim is valid.

Now imagine having two applications that cannot connect to each other because of firewall-related policies. If both applications can establish an outgoing connection to the service bus, the service bus will relay communication between the two applications. It’s as easy as that—and incredibly useful if you have a tough IT department!

Figure 3. The benefits of Windows Azure Platform AppFabric Service Bus
The benefits of Windows Azure Platform AppFabric Service Bus

Showing you example code of how this works would lead us too far (since it would involve some configuration and set up tasks). But if you think this sounds like a great feature, check the AppFabric for PHP website, which contains plenty of tutorials on this matter.

Other Features

In addition to all the features and APIs we’ve already investigated, there are a number of other features and products that are worth looking at. These features aren’t always Windows Azure-specific, like the URL rewriting module for IIS7, but your application can benefit greatly from them all the same.

PHP Azure Contributions

The Windows Azure platform provides some useful features, like reading configuration files (which can be modified even after a deployment has been done), logging in the Windows Azure environment, and accessing local storage on a virtual machine to store files temporarily. Unfortunately, these features are baked into the Windows Azure Cloud Guest OS and not available as REST services. Luckily, however, they are exposed as a C dynamic link library, which means that writing a PHP extension to interface with them is a logical step. And that’s exactly what the PHP Azure Contributions library provides: a PHP extension to make use of configuration data, logging, and local storage. Imagine having a configuration value named EmailSubject in your service configuration (ServiceConfiguration.cscfg) file. Reading this value is very easy using the PHP Azure Contributions extension:

$emailSubject = azure_getconfig("EmailSubject");

We can also write data to the Windows Azure diagnostics log. Here’s an example in which I’m writing an informational message in the diagnostics log:

azure_log("This is some useful information!", "Information");

The PHP Azure Contributions project is available on CodePlex at http://phpazurecontrib.codeplex.com.

URL Rewriting

As a PHP developer, you may already use URL rewriting. In Apache’s .htaccess files, it’s very easy to enable the rewrite engine, and to rewrite incoming URLs to real scripts. For example, the URL http://www.example.com/products/books may be mapped to http://www.example.com/index.php?page=products&category=books on your server. This technique is also available in IIS7, the Microsoft web server that’s also used in Windows Azure web roles. The above URL rewriting example can be defined in the Web.config file in the root of your Windows Azure application:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="RewriteProductsUrl" enabled="true" stopProcessing="true">
          <match url="^products/([^/]+)/?$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php?page=products&amp;category={R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Also note that, because your application is hosted on an IIS web server in Windows Azure, you can use any HttpModule for IIS, just as you would for a traditionally hosted application. This makes it easy to enable output compression, leverage the IIS authentication and authorization features, and more. Download the IIS URL Rewrite module from http://www.iis.net/download/urlrewrite.

WinCache Extension

As you may know, PHP files are interpreted into bytecode and executed from that bytecode on every request. This process is quite fast, but on high-traffic websites, it’s recommended that we cache the bytecode and skip script interpretation. This technique increases a website’s performance without requiring additional resources.

On Linux, accelerator modules that utilize these techniques, like APC and IonCube, are very common. These also work on Windows and could potentially also work on Windows Azure. However, Microsoft also released its own module that applies this technique: the WinCache extension for PHP. This extension is the fastest PHP accelerator on Windows, and also provides features like storing session data in this cache layer. The Wincache extension for PHP can be downloaded from http://www.iis.net/download/wincacheforphp.
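Enabling the extension is a php.ini change; a minimal sketch (the DLL name matches the standard Windows build, and the session line is the optional session-in-cache feature mentioned above):

extension = php_wincache.dll

; optional: keep PHP session data in the WinCache layer
session.save_handler = wincache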

CDN (Content Delivery Network)

When using Windows Azure blob storage, you’ll find that a full-featured content delivery network (CDN) is available as well. A CDN ensures that, for example, when a user downloads an image, that image will be retrieved from a storage server that’s close to that user’s client. This ensures that the download speed and latency are optimal, and the user receives the image very quickly.

With blob storage, enabling the CDN is as easy as clicking a button. After that, your public containers are replicated to the CDN, which allows your site’s users to retrieve files and resources as swiftly as possible!

Figure 4. Using the Windows Azure CDN

Domain Name Mapping

With Windows Azure, your application will be assigned a domain name under the cloudapp.net domain—for example, myphpapp.cloudapp.net. I think you’ll agree that this isn’t the greatest URL. It gets even worse when you’re using blob storage for hosting files: myphpappstorage.blob.core.windows.net is, well, just plain ugly!

Luckily, all URLs in Windows Azure can be mapped to a custom domain name. So, to map www.myphpapp.com to myphpapp.cloudapp.net, you just need to add a CNAME record to your name server. The same applies to blob storage: storage.myphpapp.com can be mapped to the very long myphpappstorage.blob.core.windows.net through the addition of a CNAME record to your DNS server.
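In BIND-style zone file notation, and purely as an illustration using the example hostnames above, the two mappings would look like this:

; CNAME records for myphpapp.com (illustrative)
www      IN CNAME myphpapp.cloudapp.net.
storage  IN CNAME myphpappstorage.blob.core.windows.net.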

Conclusion

In this article, we’ve taken a snapshot of the Windows Azure platform from a PHP perspective. While I’m slightly biased by having contributed to the Windows Azure SDK for PHP, I do think that the Windows Azure platform is a great choice for hosting PHP applications in a highly-scalable cloud environment. I also feel that there’s great value to be found in features like the Windows Azure AppFabric Service Bus. The bottom line is: I believe that Microsoft is doing their best in making PHP a first-class citizen on their cloud platform.

Posted in Azure | Leave a Comment »

Microsoft Exchange and Android

Posted by Alin D on August 22, 2010

Microsoft Exchange and Android

The Android Platform is an open source mobile device platform from the Open Handset Alliance. Companies have built a wide range of phones, netbooks and tablet PCs on it, each with its own array of bells and whistles, and the marketplace for Android apps keeps expanding and becoming more comprehensive. Better still, mobile devices running the Android OS can be configured with little hassle to sync with Microsoft Exchange Server, keeping users connected to their Outlook mail, notes, calendar and tasks even when they are on the move. Businesses can accommodate Google Android phones alongside BlackBerry, Palm, Windows Mobile and iPhone devices with hosted Exchange.

To use an Android mobile device with hosted Exchange, follow the steps below:

1. From the Home screen, tap the gray Application button.
2. Tap Email.
3. Tap Next.
4. Using the keyboard, enter the following, and then tap Next:
   Email Address: Your full email address
   Password: Your email password
5. Tap Exchange account.
6. Complete the following fields, and then tap Next:
   Domain\Username: Your full email address preceded by a backslash (for example, \jane@coolexample.com; note that some phones do not require the backslash)
   TIP: If your phone uses the HTC Sense user interface for Android, Domain and Username are two separate fields. Leave the Domain field empty and enter your full email address in the Username field.
   Password: Your email password
   Exchange Server: webmail.apps4rent.com
   Use secure connection (SSL): Select this option
   Accept all SSL certificates: Select this option
7. Select your desired Account options, and then tap Next.
8. Select your desired settings on Set up email, and then tap Done.
Android Platform Releases

Version 1.5 was the first major release of the Android Platform. It corrected many significant bugs reported by users, including email, Bluetooth and multimedia issues. Version 2.0 brought with it several new Android APIs for developers to take advantage of, such as account management, sync adapters and Bluetooth. Email support was enhanced to allow unified management of multiple Google email accounts, Microsoft Exchange and even Facebook. Version 2.2 brought with it the world’s fastest browser, as well as an enhanced Android Market Place and Flash support for Google Android. Internet connectivity via hotspot and/or tethering was also improved.

All in all, with Google Android and Microsoft Exchange Server hosting from an application service provider, companies can confidently add the Android Platform to their employee communications options.

Adrian Gates (adrian@apps4rent.com) is a Business Manager with Apps4Rent, which offers Microsoft Exchange hosting.

Posted in Exchange | Leave a Comment »

Windows 7 Automation API

Posted by Alin D on August 19, 2010

The Windows Automation API 3.0 combines the advantages of existing accessibility implementations in Microsoft® Active Accessibility® and a modern API design in UI Automation to provide easier migration from and integration with precursor technologies and industry standards.

The major improvements in this accessibility framework are:

  • End-to-end unmanaged solution, including a native client API, faster and more robust UI Automation implementations, extensible provider implementations, and Win32 proxy for performance and availability.
  • Harmonization with industry standards such as W3C ARIA and Section 508 specifications.
  • New properties and control patterns.
  • Custom properties, events, and patterns.

An Unmanaged Client API

For Windows 7, UI Automation has a new COM client API for use from unmanaged and managed clients alike. Unmanaged applications can now use UI Automation without changing languages or loading the CLR, while simultaneously benefiting from all the latest features. This new API is similar to the previous managed API, but is friendlier to C++ developers.

The heart of the new COM API is the IUIAutomation interface, which enables clients to get automation elements, register event handlers, create objects, and access other helper methods. To begin using UI Automation, simply include the Automation header (UIAutomation.h), and then CoCreate the Automation object:

IUIAutomation * pAutomation;
CoCreateInstance(__uuidof(CUIAutomation), NULL,
                 CLSCTX_INPROC_SERVER,
                 __uuidof(IUIAutomation),
                 (void **)&pAutomation);

Once you have the Automation object, you can discover the entire user interface (UI). The UI is modeled as a tree of automation elements (IUIAutomationElement objects), each element representing a single piece of UI: a button, a window, the desktop, and so on. The IUIAutomationElement interface has methods relevant to all controls, such as checking properties or setting focus. Here, you get the element with focus and its name:

IUIAutomationElement *pElement;
pAutomation->GetFocusedElement(&pElement);
BSTR name;
pElement->get_CurrentName(&name);
std::wcout << L"Focused Name: " << name << L"\n";

Furthermore, elements can expose functionality specific to a particular control through control patterns. Control patterns are collections of associated properties, events, and methods, and more than one can apply to an element. Here, I use the Invoke pattern to press a button:

IUIAutomationInvokePattern * pInvoke;
pElement->GetCurrentPatternAs(UIA_InvokePatternId,
             __uuidof(IUIAutomationInvokePattern),
             (void **)&pInvoke);
pInvoke->Invoke();

To navigate the tree of automation elements, you can use tree walkers. Here, I create an IUIAutomationTreeWalker object to navigate to the first child of an element in the Control View of the tree:

IUIAutomationTreeWalker * pWalk;  
pAutomation->get_ControlViewWalker(&pWalk);
IUIAutomationElement * pFirst;
pWalk->GetFirstChildElement(pElement, &pFirst);

To address performance issues when communicating across processes, clients can fetch multiple properties and patterns at a time. Here, I create a cache request (IUIAutomationCacheRequest object) from the Automation object, identify the property (Automation ID property) to prefetch, cache it for an element, and then get the cached ID:

IUIAutomationCacheRequest * pCR;
pAutomation->CreateCacheRequest(&pCR);
pCR->AddProperty(UIA_AutomationIdPropertyId);

IUIAutomationElement * pCachedElement;
pElement->BuildUpdatedCache(pCR, &pCachedElement);

BSTR autoID;
pCachedElement->get_CachedAutomationId(&autoID);
std::wcout << L"Cached ID: " << autoID << L"\n";

With UI Automation property conditions, looking for specific values for a property is easy; you can combine conditions with AND, OR, and NOT operators to search for UI scenarios. Here, I search for a check box:

IUIAutomationCondition * pCheckBoxProp;
VARIANT varCheckBox;
varCheckBox.vt = VT_I4;
varCheckBox.lVal = UIA_CheckBoxControlTypeId;
pAutomation->CreatePropertyCondition(
    UIA_ControlTypePropertyId,
    varCheckBox, &pCheckBoxProp);

IUIAutomationElement * pFound;
pElement->FindFirst(TreeScope_Descendants,
                    pCheckBoxProp, &pFound);

In addition to these features, UI Automation provides custom proxy registration, events, and the Text Pattern for manipulating documents.
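As a quick taste of the Text pattern, here is a sketch (assuming pElement references a text control that supports the pattern) that retrieves the document range and reads its text:

IUIAutomationTextPattern * pText;
pElement->GetCurrentPatternAs(UIA_TextPatternId,
    __uuidof(IUIAutomationTextPattern),
    (void **)&pText);

// Get the range covering the whole document and read its text;
// -1 means no maximum length.
IUIAutomationTextRange * pDocRange;
pText->get_DocumentRange(&pDocRange);
BSTR text;
pDocRange->GetText(-1, &text);
std::wcout << L"Document text: " << text << L"\n";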

Proxy Factory

Clients can customize how UI Automation sees HWND-based controls by registering customized providers with a proxy factory implementation. Custom proxies affect only the client’s own view. The factory’s CreateProvider method is called with the HWND to be proxied, as well as the idObject and idChild associated with the WinEvent, if needed.

Here, I verify the expected values of OBJID_CLIENT and CHILDID_SELF and create a custom proxy:

IFACEMETHODIMP CustomFactory::CreateProvider(
    UIA_HWND hwnd, LONG idObject, LONG idChild,
    IRawElementProviderSimple **ppRetVal)
{
    *ppRetVal = NULL;
    if(idObject == OBJID_CLIENT &&
       idChild  == CHILDID_SELF)
    {
        // Create the custom proxy.
        *ppRetVal = new CustomProxy((HWND)hwnd);
    }
    return S_OK;
}

Proxy factory registration consists of creating proxy factory entries and inserting them into a proxy factory mapping. The mappings are processed in order as clients register proxies. In this example, I create the proxy for all HWNDs with a class name containing the word “MYCUSTOMBUTTON”:

// Instantiate the proxy factory.
IUIAutomationProxyFactory * pCF =
    new CustomFactory();

// Create an entry with the factory.
IUIAutomationProxyFactoryEntry * pCEnt;
pAutomation->CreateProxyFactoryEntry(pCF, &pCEnt);
pCEnt->put_ClassName(L"MYCUSTOMBUTTON");
pCEnt->put_AllowSubstringMatch(TRUE);

// Get the mapping.
IUIAutomationProxyFactoryMapping * pMapping;
pAutomation->get_ProxyFactoryMapping(&pMapping);

// Insert the entry at the start of the mapping.
pMapping->InsertEntry(0, pCEnt);

Optimizations in the Event Model

UI Automation clients, like screen readers, track events raised in the UI by UI Automation providers and notify users of the events. To improve efficiency in Windows 7, providers can raise an event selectively, notifying only those clients that subscribe to that event.
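On the provider side, that selective raising looks roughly like the following sketch (UiaClientsAreListening and UiaRaiseAutomationEvent are exported by UIAutomationCore; pProvider is assumed to be the control's IRawElementProviderSimple implementation):

// Raise the Invoked event only if a client is actually listening.
if (UiaClientsAreListening())
{
    UiaRaiseAutomationEvent(pProvider, UIA_Invoke_InvokedEventId);
}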

Common UI Automation events that providers should support include the following:

  • Changes in a property or control pattern for a UI Automation element.
  • End user or programmatic activity that affects the UI.
  • Changes to the structure of the UI Automation tree.
  • Shift in focus from one element to another.

To listen to an event, create a COM object implementing the event handler interface, and then call the corresponding method on the IUIAutomation object to subscribe the handler to the event. Here, I subscribe a handler to the Focus Changed event:

// FocusChangedHandler implements IUnknown and
// IUIAutomationFocusChangedEventHandler::HandleFocusChangedEvent.
class FocusChangedHandler :
    public IUIAutomationFocusChangedEventHandler
{
    // ...
};

FocusChangedHandler * focusHandler =
    new FocusChangedHandler();

// Register the event with default cache request.
pAutomation->AddFocusChangedEventHandler(
    NULL, focusHandler);


Interoperability with Microsoft Active Accessibility-based Controls

To improve interoperability, UI Automation translates between Microsoft Active Accessibility and UI Automation implementations. UI Automation clients can use the new UI Automation services to interact with earlier Microsoft Active Accessibility implementations, and Microsoft Active Accessibility clients can interact with UI Automation providers.

In Windows 7, Microsoft Active Accessibility implementations can add UI Automation properties and control patterns by supporting the IRawElementProviderSimple interface to expose patterns and properties and the IAccessibleEx interface to handle ChildIds. Figure 1 shows the relationship between the IAccessible, IAccessibleEx, IRawElementProviderSimple, and IRangeValueProvider interfaces.

With the IAccessibleEx interface, developers can extend Microsoft Active Accessibility implementations by adding required UI Automation object model information. The new MSAA-to-UIA proxy featured in the Windows Automation API provides a variation of a UI Automation “provider” to Microsoft Active Accessibility implementations. UI Automation clients can interact with all variations of UI Automation implementations: native UI Automation, IAccessible (Microsoft Active Accessibility), and IAccessible + IAccessibleEx.

New Win32 Support via OLEACC Plus IAccessibleEx

In Windows 7, OLEACC proxies expose information about common controls that Microsoft Active Accessibility cannot express. The MSAA-to-UIA proxy recognizes IAccessibleEx and forwards this additional information to UI Automation. For example, the OLEACC slider proxy adds the IRangeValue pattern, exposing minimum and maximum values that Microsoft Active Accessibility cannot expose. Extending the OLEACC proxies with IAccessibleEx has the dual benefit of leveraging existing code and keeping the OLEACC proxies up to date.

Some frameworks such as HTML, Windows Presentation Foundation (WPF), and Silverlight™ have a metadata system that can associate accessibility-related properties with an element. Consequently, developers can easily fix common bugs such as an incorrect accessibility name. However, Win32 does not have a similar feature, and the very basic property system for HWNDs does not apply to sub-items.

However, with Direct Annotation, developers can mark up properties on Win32 controls with accessibility property/value information, enabling developers to fix bugs without needing a full UI Automation or Microsoft Active Accessibility implementation. Here, I set the AutomationId on a simple control with the hwnd:

IAccPropServices *pAccPropServices;
CoCreateInstance(CLSID_AccPropServices, NULL,
    CLSCTX_SERVER, IID_IAccPropServices,
    (void**)&pAccPropServices);
pAccPropServices->SetHwndPropStr(hwnd,
    OBJID_CLIENT, CHILDID_SELF,
    AutomationId_Property_GUID, L"Foo ID");
Custom Control Patterns, Properties, and Events

With Windows 7, you can extend the platform with custom control patterns, properties, and events. Because of this support, developers of UI Automation clients and providers can introduce new accessibility specifications independent of future operating system releases.

Developers register and use custom patterns, properties, and events on both the client side and the provider side. If a provider registers a property that the client hasn’t, the client can’t retrieve it; if a client registers a property that the provider hasn’t, the provider can’t respond to a request for it. Neither of these cases causes errors; one side merely remains unaware of the other’s capabilities.

Once the new property, pattern, or event is registered, using it is just like using a built-in pattern, property, or event.

Custom Properties

Registration of properties is identical for both client and provider. In the following sample, I have a native UI Automation control and I want to add a custom string property, LongName:

// The PropertyId for the LongName property.
PROPERTYID longNamePropertyId;

// This is the predefined property GUID,
// the name of the property, and the type.
UIAutomationPropertyInfo longNamePropertyInfo = {
    GUID_LongNameProp,
    L"LongName",
    UIAutomationType_String };

// This yields the property ID for new property.
pAutomationRegistrar->RegisterProperty(
    &longNamePropertyInfo, &longNamePropertyId);

Retrieving the property is similar to retrieving a normal property; you use the property ID that the RegisterProperty method initialized:

VARIANT longNameValue;
pElement->GetCurrentPropertyValue(
    longNamePropertyId, &longNameValue);
std::wcout << longNameValue.bstrVal << L"\n";

Again, on the provider side, you call the same registration method, and then add a case for the new property in the control’s implementation of IRawElementProviderSimple::GetPropertyValue.

In this code sample, the UI Automation implementation keeps a reference to the control in _pControl, and the control supports a GetLongName method:

IFACEMETHODIMP GetPropertyValue(PROPERTYID propertyId,
                                VARIANT * pRet)
{
    pRet->vt = VT_EMPTY;

    if(propertyId == longNamePropertyId)
    {
        // Get the long name from the control.
        pRet->bstrVal = _pControl.GetLongName();
        pRet->vt = VT_BSTR;
    }

    // Deal with other properties…
    return S_OK;
}
Custom Events

Events follow a nearly identical model to properties: you register the event ID, and then you can use the custom EventID exactly as you use a normal EventID. However, custom events cannot have arguments; they are merely notifications that the event occurred.

Here, I register the event. Both listening and raising would then be identical to other events:

// The EventId for the ThingHappened event
EVENTID thingHappenedEventId;

// Event Information
// This is the event GUID, the name of the event.
UIAutomationEventInfo thingEventInfo = {
    GUID_ThingHappenedEvent, L"ThingHappened" };

// This gives you the Event ID for the new event.
pAutomationRegistrar->RegisterEvent(
    &thingEventInfo, &thingHappenedEventId);
Custom Control Patterns

Creating a custom control pattern requires the following:

  • Arrays of events, properties, and methods associated with the pattern.
  • IIDs of the pattern’s corresponding provider interface and client interface.
  • Code to create a client interface object.
  • Code to perform marshalling for the pattern’s properties and methods.

On the client side, the code that registers a pattern must supply a factory for creating instances of a Client Wrapper that forwards property requests and method calls to an IUIAutomationPatternInstance provided by UI Automation. The UI Automation framework then takes care of remoting and marshalling the call.

Figure 1: COM diagram shows the role of IAccessibleEx in extending legacy implementations.

On the provider side, the code that registers a pattern must supply a “pattern handler” object that performs the reverse function of the Client Wrapper. The UI Automation framework forwards the property and method requests to the pattern handler object, which in turn calls the appropriate method on the target object’s provider interface.

The UI Automation framework takes care of all communication between the client and provider, both of which register corresponding control pattern interfaces. For more details please refer to the Windows 7 SDK.

New Properties and Control Patterns

Some other useful properties for automation elements have been added to UI Automation for Windows 7:

  • ControllerFor is an array of elements manipulated by the automation element. Without this property it is hard to determine the impact of an element’s operation.
  • DescribedBy is an array of elements that provide more information about the automation element. Instead of using the object model to discover information about the element, clients can quickly access that information in the DescribedBy property.
  • FlowsTo is an array of elements that suggest the reading order after the current automation element. FlowsTo is used when automation elements are not exposed or structured in the reading order users perceive.
  • IsDataValidForForm identifies whether data is valid in a form.
  • ProviderDescription identifies source information for the automation element’s UI Automation provider, including proxy information.

Control Patterns for Virtualized Child Objects

When a control has too many children to load at once, a common solution is to treat the excess children as virtualized controls. This creates problems for the UI Automation tree because there are only a handful of real controls, and the virtualized controls simply don’t exist in the UI Automation tree.

To manage this, UI Automation offers two control patterns. The ItemContainer pattern lets a user search a container of virtualized controls for specific properties. This gives the client a reference to a virtualized control, but the user can’t do anything with it. The VirtualizedItem pattern enables the client to force the item to exist, either by realizing it internally, or by having it scroll on screen.

In this client-side code example, I search for a specifically named item in a virtualized list:

// Get the ItemContainer pattern.
IUIAutomationItemContainerPattern * pContainer;
pElement->GetCurrentPatternAs(
  UIA_ItemContainerPatternId,
  __uuidof(IUIAutomationItemContainerPattern),
  (void**)&pContainer);

// Search the container for the property
// (name holds the item name we are looking for).
VARIANT varNameStr;
varNameStr.vt = VT_BSTR;
varNameStr.bstrVal = SysAllocString(name);

IUIAutomationElement * pFound;
pContainer->FindItemByProperty(NULL,
  UIA_NamePropertyId, varNameStr, &pFound);

// Realize the virtual element.
IUIAutomationVirtualizedItemPattern * pVirt;
pFound->GetCurrentPatternAs(
  UIA_VirtualizedItemPatternId,
  __uuidof(IUIAutomationVirtualizedItemPattern),
  (void**)&pVirt);

pVirt->Realize();

New UI Automation Properties for Accessible Rich Internet Applications (ARIA)

Every day, Web sites are increasing their utility with dynamic content and advanced UI controls by using technologies like Asynchronous JavaScript and XML (AJAX), HTML, and JavaScript. However, assistive technologies are frequently unable to interact with these complex controls or expose dynamic content to users. Accessible Rich Internet Applications (ARIA) is a W3C technical specification for developing Web content and applications so that they are accessible to people with disabilities.

To support the ARIA specification, the UI Automation specification enables developers to associate UI Automation AriaRole and AriaProperties attributes with W3C ARIA Roles, States, or Properties. This helps user applications such as Internet Explorer support the ARIA object model in the context of UI Automation while keeping a baseline accessibility object model.
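For example, a client can read these attributes like any other property. The following sketch (pElement is assumed to reference an element inside an ARIA-aware host such as Internet Explorer) prints the ARIA role and properties:

VARIANT varAria;
pElement->GetCurrentPropertyValue(UIA_AriaRolePropertyId, &varAria);
std::wcout << L"AriaRole: " << varAria.bstrVal << L"\n";
VariantClear(&varAria);

pElement->GetCurrentPropertyValue(UIA_AriaPropertiesPropertyId, &varAria);
std::wcout << L"AriaProperties: " << varAria.bstrVal << L"\n";
VariantClear(&varAria);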

Some parts of the ARIA specification can be mapped to the desktop-oriented Microsoft Active Accessibility object model; however, much of the specification can only be applied to rich internet applications. Table 1 lists some examples of mappings from W3C ARIA Roles to Microsoft Active Accessibility Roles and UI Automation Control Types.

For example, the ARIA Role checkbox is supported in Microsoft Active Accessibility by the role ROLE_SYSTEM_CHECKBUTTON and in UI Automation by the combination of control type Checkbox and AriaRole checkbox. The ARIA state checked is supported in Microsoft Active Accessibility by the state STATE_SYSTEM_CHECKED and in UI Automation by the control pattern Toggle Pattern and the AriaProperties property checked.

ARIA States and Properties are supported by the UI Automation AriaProperties property with the following exceptions: ARIA properties that take object references (like the describedby property), and ARIA properties already supported by the accessibility object model. Table 2 lists examples of mappings from W3C ARIA States and Properties to various properties and functions of Microsoft Active Accessibility and UI Automation.

Conclusion

With application user interfaces growing more and more complex, getting accessibility right is a challenge for developers. Programmatic access to the UI is critical in the development of assistive technologies like screen readers and magnifiers. To address this, the Windows 7 Automation API aims to provide a complete end-to-end, flexible, extensible, and consistent framework with improved design and performance.

Table 1: W3C ARIA Roles can be mapped to Microsoft Active Accessibility roles and UI Automation control types and AriaRole properties.

W3C ARIA Role MSAA Role UIA Control Type UIA AriaRole Property
button ROLE_SYSTEM_PUSHBUTTON button button
checkbox ROLE_SYSTEM_CHECKBUTTON Checkbox checkbox
combobox ROLE_SYSTEM_COMBOBOX Combobox combobox
grid ROLE_SYSTEM_TABLE DataGrid grid
gridcell ROLE_SYSTEM_CELL DataItem gridcell
group ROLE_SYSTEM_GROUPING Grouping group
img ROLE_SYSTEM_GRAPHIC Image img
link ROLE_SYSTEM_LINK HyperLink link
list ROLE_SYSTEM_LIST List list
menu ROLE_SYSTEM_MENUPOPUP Menu menu
presentation ROLE_SYSTEM_PANE Pane presentation
progressbar ROLE_SYSTEM_PROGRESSBAR ProgressBar progressbar
radio ROLE_SYSTEM_RADIOBUTTON RadioButton radio
slider ROLE_SYSTEM_SLIDER Slider slider
tooltip ROLE_SYSTEM_TOOLTIP Tooltip tooltip
tree ROLE_SYSTEM_OUTLINE Tree tree
treegrid ROLE_SYSTEM_TABLE DataGrid treegrid
Table 2: W3C ARIA States and Properties can be mapped to Microsoft Active Accessibility properties and UI Automation control patterns and AriaProperties properties.

W3C ARIA States and Properties Microsoft Active Accessibility UI Automation Control Patterns and Properties UI Automation AriaProperties Property
checked STATE_SYSTEM_CHECKED Toggle Pattern, checked checked
controls n/a ControllerFor n/a
describedby n/a DescribedBy n/a
disabled STATE_SYSTEM_UNAVAILABLE IsEnabled False disabled
flowto n/a FlowsTo n/a
invalid n/a IsDataValidForForm invalid
labelledby n/a LabeledBy n/a
live n/a n/a live
multiselectable STATE_SYSTEM_EXTSELECTABLE CanSelectMultiple multiselectable
readonly STATE_SYSTEM_READONLY IsReadOnly readonly
required STATE_REQUIRED IsRequiredForForm required
secret STATE_SYSTEM_PROTECTED IsPassword secret
valuemax n/a Maximum Property in RangeValue Pattern valuemax
valuemin n/a Minimum Property in RangeValue Pattern valuemin
valuenow IAccessible::get_accValue Value Property in RangeValue Pattern valuenow

Posted in Windows 7 | Leave a Comment »

Windows Azure Storage

Posted by Alin D on August 17, 2010

Windows Azure Overview

Before I begin to build the application, a quick overview of Windows Azure and Roles is necessary. There are many resources available to describe these, so I won’t go into a lot of detail here.

Windows Azure is Microsoft’s Cloud Computing offering that serves as the development, service host, and service management environment for the Windows Azure Platform. The Platform is comprised of three pieces: Windows Azure, SQL Azure, and AppFabric.

  • Windows Azure: Cloud-based Operating System which provides a virtualized hosting environment, computing resources, and storage.
  • SQL Azure: Cloud-based relational database management system that includes reporting and analytics.
  • AppFabric: Service bus and access control for connecting distributed applications, including both on-premise and cloud applications.

Windows Azure Roles

Unlike security related roles that most developers may be familiar with, Windows Azure Roles are used to provision and configure the virtual environment for the application when it is deployed. The figure below shows the Roles currently available in Visual Studio 2010.

Roles

Except for the CGI Web Role, these should be self-explanatory. The CGI Web Role is used to provide an environment for running non-ASP.NET web applications such as PHP. This provides a means for customers to move existing applications to the cloud without the cost and time associated with rewriting them in .NET.

Building the Azure application

The first step is, of course, to create the Windows Azure application to use for this demonstration. After the prerequisites have been installed and configured, you can open Visual Studio and take the normal path to create a new project. In the New Project dialog, expand the Visual C# tree if it isn’t already expanded, and click Cloud. You will see one template available, Windows Azure Cloud Service. Note that although .NET Framework 4 is selected, Windows Azure does not support 4.0 yet, and the projects will default to .NET Framework 3.5.

New Project

After selecting this template, the New Cloud Service Project dialog will be displayed, listing the available Windows Azure Roles. For this application, select an ASP.NET Web Role and a Worker Role. After the roles have been added to the Cloud Service Solution list, you can rename them by hovering over the role to display the edit link. You can, of course, add additional Roles after the solution has been created.

Cloud Service Project

After the solution has been created, you will see three projects in the Solution Explorer.

Solution Explorer

As this article is about Azure Storage rather than Windows Azure itself, I’ll briefly cover some of the settings but leave more in-depth coverage for other articles or resources.

Under the Roles folder, you can see two items, one for each of the roles that were added in the previous step. Whether you double click the item or right-click and select Properties from the context menu, it will open the Properties page for the given role. The below image is for the AzureStorageWeb Role.

Properties

The first section in the Configuration tab is used to select the trust level for the application. These settings should be familiar to most .NET developers. The Instances section tells the Windows Azure Platform how many instances of this role to create and the size of the Virtual Machine to provision. If this Web Role were for a high-volume web application, then selecting a higher number of instances would improve its availability. Windows Azure will handle the load balancing for all of the instances that are created. The VM sizes are as follows:

  • Small: 1 core processor, 1.75 GB RAM, 250 GB hard drive
  • Medium: 2 core processor, 3.5 GB RAM, 500 GB hard drive
  • Large: 4 core processor, 7 GB RAM, 1000 GB hard drive
  • Extra large: 8 core processor, 15 GB RAM, 2000 GB hard drive

The Startup action is specific to Web Roles and, as you can see, allows you to designate whether the application is accessed via HTTP or HTTPS.

Settings

The Settings tab should be familiar to .NET developers, and is where any additional settings for the application can be created. Any settings added here will be placed in the ServiceConfiguration and ServiceDefinition files, since they apply to the service itself, not specifically to a role project. Of course, the projects also have the web.config and app.config files that are specific to them.

EndPoints

The EndPoints tab allows you to configure the endpoints that will be configured and exposed for the Role. In this case, the Web Role can be configured for HTTP or HTTPS with a specific port and SSL certificate if appropriate.

EndPoints

As you can see here, the Worker Role has a different Endpoints screen. The types available from the dropdown are Input and Internal, and the Protocol dropdown includes http, https, and tcp by default. This allows you to connect to the Worker Role via any of these protocols and expose the functionality externally if necessary.

Web Role

Since this article is meant to focus on Azure Storage, I’ll keep the UI simple. However, thanks to jQuery and some styles, a simple interface can still look good for little effort.

UI

There is nothing special about the web application; it is just like any other web app you have built. There is one class that is unique to Azure, however: the WebRole class. All Roles in Windows Azure must have a class that derives from RoleEntryPoint. This class is used by Windows Azure to initialize and control the application. The default implementation provides an override for the OnStart method and assigns a handler for the RoleEnvironmentChanging event. This allows the Role to be restarted if the configuration changes, such as when the instance count is increased or a new setting is added. If other actions need to be taken before the application starts, they should be handled here. Likewise, the Run and OnStop methods can be overridden to perform an action while the application is running and before it is stopped, respectively.

public override bool OnStart()
{
    DiagnosticMonitor.Start("DiagnosticsConnectionString");

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    return base.OnStart();
}

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // If a configuration setting is changing
    if(e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
    {
        // Set e.Cancel to true to restart this role instance
        e.Cancel = true;
    }
}

Azure Storage

As I’ve said, there are three types of storage available with the Windows Azure Platform: blob, table, and queue.

Blob Storage

Binary Large Object, or blob, should be familiar to most developers and is used to store things like images, documents, or videos; something larger than a name or ID. Blob storage is organized by containers that can have two types of blob: Block and Page. The type of blob needed depends on its usage and size. Block blobs are limited to 200 GB, while Page blobs can go up to 1 TB. Note, however, that in development, storage blobs are limited to 2 GB. Blob storage can be accessed via RESTful methods with a URL such as: http://myapp.blob.core.windows.net/container_name/blob_name.

Although blob storage isn’t hierarchical, a hierarchy can be simulated with blob names. Blob names can contain the / character, so you can have names such as 2009/10/4/photo1, 2009/10/4/photo2, and 2008/6/25/photo1. Here it appears that the blobs are organized by year, month, and day; in reality, however, the / is simply part of each blob’s name.

Block Blob

Although a Block blob can be up to 200 GB, if it is larger than 64 MB, it must be sent in multiple chunks of no more than 4 MB. Storing a Block blob is also a two-step process: the blocks must be committed before the blob becomes available. When a Block blob is sent in multiple chunks, they can be sent in any order; the order of the block IDs in the commit call determines how the blob is assembled. Thankfully, as we’ll see later, the Azure Storage API hides these details so you won’t have to worry about them unless you want to.
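If you do want to manage the blocks yourself, the StorageClient library exposes the two-step process directly through CloudBlockBlob. The following is a rough sketch (the container name, blob name, and 4 MB block size are illustrative, and stream is assumed to be the source data):

// Upload a stream as explicit blocks, then commit the block list.
CloudBlockBlob blob = Client.GetContainerReference("photos")
    .GetBlockBlobReference("bigfile.dat");

List<string> blockIds = new List<string>();
byte[] buffer = new byte[4 * 1024 * 1024];
int bytesRead;
int blockNum = 0;

while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Block IDs must be Base64-encoded strings of equal length.
    string blockId = Convert.ToBase64String(
        BitConverter.GetBytes(blockNum++));
    blob.PutBlock(blockId, new MemoryStream(buffer, 0, bytesRead), null);
    blockIds.Add(blockId);
}

// The order of the IDs in this list determines how the blob is assembled.
blob.PutBlockList(blockIds);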

Page Blob

A Page blob can be up to 1 TB in size, and is organized into 512-byte pages within the blob. This means any point in the blob can be accessed for read or write operations by using the offset from the start of the blob. This is the advantage of using a Page blob rather than a Block blob, which can only be accessed as a whole.
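A corresponding sketch for a Page blob (again illustrative; offsets and write lengths must be multiples of 512 bytes):

// Create a 1 MB page blob, then write one 512-byte page at offset 4096.
CloudPageBlob pageBlob = Client.GetContainerReference("disks")
    .GetPageBlobReference("data.bin");
pageBlob.Create(1024 * 1024);   // total size must be a multiple of 512

byte[] page = new byte[512];
pageBlob.WritePages(new MemoryStream(page), 4096);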

Table Storage

Azure tables are not like tables from an RDBMS like SQL Server. They are composed of a collection of entities and properties, with properties further containing collections of name, type, and value. The thing to realize, and what may cause a problem for some developers, is that Azure tables can’t be accessed using ADO.NET methods. As with all other Azure storage methods, RESTful access is provided: http://myapp.table.core.windows.net/TableName.

I’ll cover tables in-depth later when getting to the actual code.

Queue Storage

Queues are used to transport messages between applications, Azure based or not. Think of Microsoft Message Queuing (MSMQ) for the cloud. As with the other storage types, RESTful access is available as well: http://myapp.queue.core.windows.net/Queuename.

Queue messages can be no larger than 8 KB; remember, a queue isn’t meant to transport large objects, only messages. However, a message can be a URI to a blob or table entity. Where Azure Queues differ from traditional queue implementations is that they are not strict FIFO containers: a message remains in the queue until it is explicitly deleted. If a message is read by one process, it is marked as invisible to other processes for a variable time period, which defaults to 30 seconds and can be no more than 2 hours; if the message hasn’t been deleted by then, it is returned to the queue and becomes available for processing again. Because of this behavior, there is also no guarantee that messages will be processed in any particular order.
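In code, that read-then-delete contract looks roughly like the following sketch (the queue name and one-minute timeout are illustrative, and ProcessMessage stands in for your own handler):

CloudQueue queue = Client.GetQueueReference("photoqueue");

// Hide the message from other consumers for one minute while we work.
CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(1));
if (msg != null)
{
    ProcessMessage(msg.AsString);   // hypothetical handler

    // Without this call, the message reappears once the timeout expires.
    queue.DeleteMessage(msg);
}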

Building the Storage Methods

To start with, I’ll add another project to the solution, a Class Library project. This project will serve as a container for the storage methods and implementation used in this solution. After creating the project, you’ll need to add a reference to the Windows Azure Storage assembly, Microsoft.WindowsAzure.StorageClient.dll, which can be found in the Windows Azure SDK folder, C:\Program Files\Windows Azure SDK\v1.1\ref.

StorageBase

Since a CloudStorageAccount is necessary for any access, I’ll create a base class to contain a property for it.

public static CloudStorageAccount Account
{
    get
    {
        // For development this can be used
        //return CloudStorageAccount.DevelopmentStorageAccount;
        // or this so code doesn't need to be changed before deployment
        return CloudStorageAccount.FromConfigurationSetting("DiagnosticsConnectionString");
    }
}

You’ll see here that we can use two methods to return the CloudStorageAccount object. Since the application is being run in a development environment, we could use the first method and return the static property DevelopmentStorageAccount. However, before deployment, this would need to be updated to an actual account. Using the second method, however, the account information can be retrieved from the configuration file, similar to database connection strings in an app.config or web.config file. Before the FromConfigurationSetting method can be used though, we must add some code to the OnStart method of the WebRole class.

// This code is necessary to use CloudStorageAccount.FromConfigurationSetting
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    RoleEnvironment.Changed += (sender, arg) =>
    {
    if(arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
            .Any((change) => (change.ConfigurationSettingName == configName)))
        {
            if(!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
            {
                RoleEnvironment.RequestRecycle();
            }
        }
    };
});

This code basically tells the runtime to use the configuration file for setting information, and also sets an event handler for the RoleEnvironment.Changed event to detect any changes to the configuration file. If a change is detected, the Role will be restarted so those changes can take effect. This code also makes the default RoleEnvironment.Changing event handler implementation unnecessary since they both do the same thing, restarting the role when a configuration change is made.
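For reference, the setting itself lives in the ServiceConfiguration.cscfg file and looks something like the following (the account name and key are placeholders; for the development fabric, the value UseDevelopmentStorage=true can be used instead):

<ConfigurationSettings>
  <Setting name="DiagnosticsConnectionString"
           value="DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=..." />
</ConfigurationSettings>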

Implementing Blob Storage

The first thing we need is a reference to a CloudBlobClient object to access the methods. As you can see, there are two ways to do this. Both produce the same result; the second is just less typing, while the first gives more control over the creation.

public static CloudBlobClient Client
{
    get
    {
        //return new CloudBlobClient(Account.BlobEndpoint.AbsoluteUri,
        //                           Account.Credentials);

        // More direct method
        return Account.CreateCloudBlobClient();
    }
}

Uploading the blob is a relatively easy task.

public string PutBlob(Stream stream, string fileName)
{
    // CreateIfNotExist returns true if the container did not exist
    // and was created, but for this purpose it doesn't matter.
    Client.GetContainerReference(CONTAINER_NAME).CreateIfNotExist();

    // Now that the container has been created if necessary,
    // we can upload the blob and return its address.
    CloudBlob blob = Client.GetContainerReference(CONTAINER_NAME)
        .GetBlobReference(fileName);
    blob.UploadFromStream(stream);

    return blob.Uri.AbsoluteUri;
}

As you can see, the first step is to retrieve a reference to the container. The CreateIfNotExist method is a convenience that, as the name implies, will create the container if it doesn’t already exist. An alternative approach would be as follows:

// GetContainerReference never returns null, so test for the container
// by fetching its attributes and creating it if that fails.
CloudBlobContainer container = Client.GetContainerReference(CONTAINER_NAME);
try
{
    container.FetchAttributes();
}
catch (StorageClientException)
{
    container.Create();
}

After you have a reference to the container, the next step is to get a reference to the blob. If a blob already exists with the specified name, it will be overwritten. After obtaining a reference to the CloudBlob object, it’s just a matter of calling the appropriate method to upload the blob. In this case, I’ll use the UploadFromStream method since the file is coming from the ASP.NET Upload control as a stream; however, there are other methods depending on the environment and usage, such as UploadFile, which uses the path of a physical file. All of the upload and download methods also have asynchronous counterparts.

One thing to note here is that container names must be lowercase. If you try a name containing capital letters, you will receive a rather cryptic and uninformative StorageClientException with the message “One of the request inputs is out of range.” Further, the InnerException will be a WebException with the message “The remote server returned an error: (400) Bad Request.”

Implementing Table Storage

Of the three storage types, Azure Table Storage requires the most setup. The first thing necessary is to create a model for the data that will be stored in the table.

public class MetaData : TableServiceEntity
{
    public MetaData()
    {
        PartitionKey = "MetaData";
        RowKey = "Not Set";
    }

    public string Description { get; set; }
    public DateTime Date { get; set; }
    public string ImageURL { get; set; }
}

For this demonstration, the model is very simple, but, most importantly, it derives from TableServiceEntity which tells Azure the class represents a table entity. Although Azure Table Storage is not a relational database, there must be some mechanism to uniquely identify the rows that are stored in a table. The PartitionKey and RowKey properties from the TableServiceEntity class are used for this purpose. The PartitionKey itself is used to partition the table data across multiple storage nodes in the virtual environment, and, although an application can use one partition for all table data, it may not be the best solution for scalability and performance.

Windows Azure Table Storage is based on WCF Data Services (formerly, ADO.NET Data Services), so there needs to be some context for the table. The TableServiceContext class represents this, so I’ll derive a class from it.

public class MetaDataContext : TableServiceContext
{
    private const string ENTITY_NAME = "MetaData";

    public MetaDataContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        CloudTableClient.CreateTablesFromModel(typeof(MetaDataContext),
                                               baseAddress, credentials);
    }
}

Within the constructor, I’ll make sure the table has also been constructed, so it will be available when necessary. This could, of course, also be done in the RoleEntryPoint OnStart method if the table may be used in multiple classes.

public void Add(MetaData data)
{
    // RowKey can't have / so replace it
    data.RowKey = data.RowKey.Replace("/", "_");
    AddObject(ENTITY_NAME, data);
    SaveChanges();
}

Adding to the table should be very familiar to anyone who has worked with LINQ to SQL or Entity Framework. You add the object to the data context, then save all the changes. Note here the RowKey naming. Since I’m using the date for the filename, I need to make a slight modification since RowKey can’t contain “/” characters.

public IQueryable<MetaData> MetaData
{
    get { return CreateQuery<MetaData>(ENTITY_NAME); }
}

Getting to the contents of the table is a matter of creating a DataServiceQuery for the model and specifying the EntitySet you are interested in. From there, you can use LINQ to access a particular item.

public MetaData GetMetaData(string key)
{
    return (from e in Context.MetaData
            where e.RowKey == key && e.PartitionKey == "MetaData"
            select e).SingleOrDefault();
}

Implementing Queue Storage

Queue storage is probably the easiest part to implement. Unlike Table storage, there is no need to setup a model and context, and unlike Blob storage, there is no need to be concerned with blocks and pages. Queue storage is only meant to store small messages, 8 KB or less. Adding a message to a Queue follows the same pattern as the other storage mechanisms. First, get a reference to the Queue, creating it if necessary, then add the message.

public void Add(CloudQueueMessage msg)
{
    Client.GetQueueReference(QUEUE_NAME).CreateIfNotExist();

    Client.GetQueueReference(QUEUE_NAME).AddMessage(msg);
}

Retrieving a message from the Queue is just as easy: peek at the Queue to make sure a message is actually waiting before attempting to retrieve it.

public CloudQueueMessage GetNextMessage()
{
    CloudQueueMessage msg = null;
    CloudQueue queue = Client.GetQueueReference(QUEUE_NAME);

    // PeekMessage returns null when the queue is empty.
    if(queue.PeekMessage() != null)
    {
        msg = queue.GetMessage();
    }

    return msg;
}

Worker Role

Now we can finally get to the Worker Role. To demonstrate how a Worker Role can be incorporated into a project, I’ll use it to add a watermark to the images that have been uploaded. The Queue that was previously created will be used to notify this Worker Role when it needs to process an image and which one to process.

Just as with the Web Role, the OnStart method is used to set up and configure the environment. Worker Roles have an additional method, Run, which simply creates a loop and continues indefinitely. It’s somewhat odd not to have an exit condition; instead, when Stop is called for this role, it forcibly terminates the loop, which may cause issues for any code running in it.

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("AzureStorageWorker entry point called", "Information");

    while(true)
    {
        PhotoProcessing.Run();

        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
}

You can view the sample code for this article to see the details of PhotoProcessing.Run. It simply gets the blob indicated in the QueueMessage, adds a watermark, and updates the Blob storage.
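Its general shape is roughly the sketch below (WatermarkBlob is a hypothetical stand-in for the sample’s actual image processing; the $-delimited message format matches the one built in OnUpload later in this article):

public static class PhotoProcessing
{
    public static void Run()
    {
        Storage.Queue queueStorage = new Storage.Queue();
        CloudQueueMessage msg = queueStorage.GetNextMessage();
        if (msg == null)
            return;   // nothing to process this pass

        // The message was written as "<blobURI>$<fileName>".
        string[] parts = msg.AsString.Split('$');
        string blobUri = parts[0];
        string fileName = parts[1];

        // Download the blob, stamp the watermark, and upload the result.
        WatermarkBlob(blobUri, fileName);
    }

    private static void WatermarkBlob(string blobUri, string fileName)
    {
        // Image processing omitted; see the sample code for the details.
    }
}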

Putting it all Together

Now that everything has been implemented, it’s just a matter of putting it all together. Using the Click event for the Upload button on the ASPX page, I’ll get the file that is being uploaded and the other pertinent details. The first step is to upload the blob so we can get the URI that points to it and add it to Table storage along with the description and date. The final step is adding a message to the Queue to trigger the worker process.

protected void OnUpload(object sender, EventArgs e)
{
    if(FileUpload.HasFile)
    {
        DateTime dt = DateTime.Parse(Date.Text);
        string fileName = string.Format("{0}_{1}",
           dt.ToString("yyyy/MM/dd"), FileUpload.FileName);

        // Upload the blob
        Storage.Blob blobStorage = new Storage.Blob();
        string blobURI =
          blobStorage.PutBlob(FileUpload.PostedFile.InputStream, fileName);


        // Add entry to table
        Storage.Table tableStorage = new Storage.Table();

        tableStorage.Add(new Storage.MetaData
            {
                Description = Description.Text,
                Date = dt,
                ImageURL = blobURI,
                RowKey = fileName
            }
        );

        // Add message to queue
        Storage.Queue queueStorage = new Storage.Queue();
        queueStorage.Add(new CloudQueueMessage(blobURI + "$" + fileName));

        // Reset fields
        Description.Text = "";
        Date.Text = "";
    }
}

As I said, the UI is very simple, with the focus being on the underlying processes for Azure Storage.

Conclusion

Hopefully, this article has given you an overview of what Windows Azure Storage is and how it can be used. There is, of course, much more to this topic, which may be addressed in follow-up articles. In the meantime, there are many resources available that can provide you with additional information and insight about Windows Azure and Windows Azure Storage.

Points of Interest

Names for Tables, Blob containers, and Queues seem to have a mixture of support for uppercase names. It would seem the best approach is to always use lowercase.

Posted in Azure | Leave a Comment »

Install Windows Updates using Windows PowerShell

Posted by Alin D on January 9, 2010

In most domains, Windows Updates are controlled by Group Policy and Windows Server Update Services (WSUS). For client computers, it's common to download and install updates automatically. For servers, we want to control the installation of Windows Updates inside a scheduled maintenance window, and hence the Windows Update settings are configured not to install updates automatically.

If there are only a few servers to manage, it won't be that time consuming to log on to each server with Remote Desktop and do the Windows Update installations manually. In larger environments, however, this isn't an option; the process must be automated in some way. While enterprise environments typically invest in a commercial product like BigFix for patch management, this might be overkill or too expensive for environments smaller than an enterprise.

To manage Windows Update in an automated way, we can access the Windows Update Agent API using a COM object called Microsoft.Update.Session. Using the New-Object cmdlet in Windows PowerShell, it's easy to work with this COM object. Based on this COM object and portions of James O'Neill's functions for managing Windows Update, I've written a PowerShell script called WindowsUpdate.
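As a quick illustration of what the script builds on, here is a minimal sketch (not the script itself; the search criteria string is just an example) that lists the software updates not yet installed:

# Create the Windows Update Agent session and search for updates.
$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$result   = $searcher.Search("IsInstalled=0 and Type='Software'")

# List the titles of the updates that were found.
$result.Updates | ForEach-Object { $_.Title }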

This script is intended to be used to download and install Windows Updates on servers. It runs as expected when invoked from a local computer; however, invoking the script using PowerShell Remoting like I was planning turned out to be problematic: Invoke-Command -ComputerName ServerA -FilePath 'C:\WindowsUpdate.ps1' (Download Script source)

This will return the following error message:
Exception calling "CreateUpdateDownloader" with "0" argument(s): "Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))"

This issue doesn't seem to be related to PowerShell, as several others report the same problem in other languages like VBScript.

The common workaround is to schedule the script to run as a scheduled task running as SYSTEM. I've chosen to use this approach and to use PowerShell Remoting to invoke the scheduled task that runs the script. An example:

$servers = Get-Content 'C:\ps-scripts\Windows Update\BulkA.txt'
foreach ($server in $servers) {
    Invoke-Command -ComputerName $server -ScriptBlock {
        schtasks /run /tn "PowerShell - download and install Windows Updates"
    }
}

To create the scheduled task, I would recommend using Group Policy Preferences.
A few sample screenshots from my lab setup:

[Screenshots: Group Policy Preferences scheduled task settings]

Although it's possible to invoke the script on all servers in the domain at once, e.g. by using the Active Directory module for PowerShell to get the server names, I would recommend breaking the installations down into several bulks. This way you can ensure that all domain controllers don't go offline at the same time, and so on.

As you can see in the script, it's also possible to enable reporting to e-mail or to a file (HTML) in a central location, in addition to controlling whether the servers should reboot if required.
Planned improvements include nicer reports, Windows Update settings in the reports and, if possible, making the script work without having to use scheduled tasks. Suggestions for other improvements are always welcome.

Posted in Powershell | Leave a Comment »