Thursday 31 December 2009

Specifying Resource Keys using Data Binding in WPF, Take 2: Introducing the ResourceKeyBinding markup extension

A few nights before Christmas, when, all through the house, not a distraction was stirring, not even a spouse, I posted about a technique that mixed up Resources and Data Binding in WPF, letting you use data binding to specify the key of the resource you wanted to use for a property. This trick helps to keep ViewModels unpolluted by Viewish things such as the URIs of images, while still retaining some control of those aspects - for example, over which Images are shown.

There was one thing I didn’t like about the technique that I showed: for every kind of Dependency Property that was to be the target of a data-bound resource key, you had to use my ResourceKeyBindingPropertyFactory to derive a new Dependency Property to hold the data binding.

Today I’m going to show a much more elegant technique: the ResourceKeyBinding markup extension. In the following snippet, I’ve modified the example I showed last time so that it now uses my new ResourceKeyBinding to data bind the keys of the resources used for the caption on the buttons:

<DataTemplate>
  <Button Command="{Binding}" Padding="2" Margin="2" Width="100" Height="100">
    <StackPanel>
      <Image HorizontalAlignment="Center"
             Width="60"
             app:ResourceKeyBindings.SourceResourceKeyBinding="{Binding Converter={StaticResource ResourceKeyConverter}, ConverterParameter=Image.{0}}"/>
      <TextBlock Text="{ext:ResourceKeyBinding Path=Name, StringFormat=Caption.{0} }" HorizontalAlignment="Center" FontWeight="Bold" Margin="0,2,0,0"/>
    </StackPanel>
  </Button>
</DataTemplate>

As you can see from the TextBlock, you use ResourceKeyBinding in almost exactly the same way that you would use a normal Binding: not shown here are the Source, RelativeSource and ElementName properties, which work as you would expect; and, as with Binding, if all of these are omitted, the data source for the ResourceKeyBinding is the DataContext of the element. I’m also making use of the StringFormat capability of data bindings, which takes the value of the property indicated by Path and applies the given format string to it.

With this in place, the TextBlock should be given the appropriate value picked out from our amended App.xaml:

<Application.Resources>
  <BitmapImage x:Key="Image.AngryCommand" UriSource="Angry.png"/>
  <BitmapImage x:Key="Image.CoolCommand" UriSource="Cool.png"/>
  <BitmapImage x:Key="Image.HappyCommand" UriSource="Happy.png"/>

  <sys:String x:Key="Caption.Angry">Angry. Rrrr!</sys:String>
  <sys:String x:Key="Caption.Happy">Happy. Ha ha!</sys:String>
  <sys:String x:Key="Caption.Cool">Chilled out</sys:String>
</Application.Resources>

And sure enough, it is:

[Image: ResourceBindingSampleImage2]

Behind the curtain

So how does it work? There are two parts to it. The first is the markup extension itself:

public class ResourceKeyBindingExtension : MarkupExtension
{
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        var resourceKeyBinding = new Binding()
        {
            BindsDirectlyToSource = BindsDirectlyToSource,
            Mode = BindingMode.OneWay,
            Path = Path,
            XPath = XPath,
        };

        //Binding throws an InvalidOperationException if we try setting all three
        // of the following properties simultaneously: thus make sure we only set one
        if (ElementName != null)
        {
            resourceKeyBinding.ElementName = ElementName;
        }
        else if (RelativeSource != null)
        {
            resourceKeyBinding.RelativeSource = RelativeSource;
        }
        else if (Source != null)
        {
            resourceKeyBinding.Source = Source;
        }

        var targetElementBinding = new Binding();
        targetElementBinding.RelativeSource = new RelativeSource()
        {
            Mode = RelativeSourceMode.Self
        };

        var multiBinding = new MultiBinding();
        multiBinding.Bindings.Add(targetElementBinding);
        multiBinding.Bindings.Add(resourceKeyBinding);

        // If we set the Converter on resourceKeyBinding then, for some reason,
        // MultiBinding wants it to produce a value matching the Target Type of the MultiBinding
        // When it doesn't, it throws a wobbly and passes DependencyProperty.UnsetValue through
        // to our MultiBinding ValueConverter. To circumvent this, we do the value conversion ourselves.
        // See http://social.msdn.microsoft.com/forums/en-US/wpf/thread/af4a19b4-6617-4a25-9a61-ee47f4b67e3b
        multiBinding.Converter = new ResourceKeyToResourceConverter()
        {
            ResourceKeyConverter = Converter,
            ConverterParameter = ConverterParameter,
            StringFormat = StringFormat,
        };

        return multiBinding.ProvideValue(serviceProvider);
    }

    [DefaultValue("")]
    public PropertyPath Path { get; set; }

    // [snipped rather uninteresting declarations for all the other properties]
}

Under the covers, ResourceKeyBindingExtension is being rather cunning. It constructs a MultiBinding with two child bindings. The first is set up with a RelativeSource mode of Self so that it grabs a reference to the ultimate target element (in the case of the example above, the TextBlock). The second is used to get hold of the resource key: it is initialised with the parameters that ResourceKeyBinding is given – the property path and data source, for example.

Every MultiBinding needs a converter, and we configure ours just before ProvideValue returns. The job of this converter is to use the resource key obtained by the second child binding to find the appropriate resource in the pool of resources available to the target element obtained by the first child binding – FrameworkElement.TryFindResource does the heavy lifting for us here:

class ResourceKeyToResourceConverter : IMultiValueConverter
{
    // expects the target object as the first parameter, and the resource key as the second
    public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        if (values.Length < 2)
        {
            return null;
        }

        // Guard against a non-FrameworkElement target (TryFindResource needs one)
        var element = values[0] as FrameworkElement;
        if (element == null)
        {
            return null;
        }

        var resourceKey = values[1];
        if (ResourceKeyConverter != null)
        {
            resourceKey = ResourceKeyConverter.Convert(resourceKey, targetType, ConverterParameter, culture);
        }
        else if (StringFormat != null && resourceKey is string)
        {
            resourceKey = string.Format(StringFormat, resourceKey);
        }

        var resource = element.TryFindResource(resourceKey);

        return resource;
    }

    public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }

    public IValueConverter ResourceKeyConverter { get; set; }

    public object ConverterParameter { get; set; }

    public string StringFormat { get; set; }
}

You’ll notice that if ResourceKeyBinding is given a Converter or a StringFormat it doesn’t give these to resourceKeyBinding as you might expect. Instead it passes them on to the ResourceKeyToResourceConverter, which handles the conversion or string formatting itself. I’ve not done it this way just for fun: I found out the hard way that if you include a converter in any of the child Bindings of a MultiBinding, then WPF, rather unreasonably in my opinion, expects that converter to produce a result that is of the same Type as the property that the MultiBinding is targeting. If the Converter on the child Binding produces a result of some other type, then the MultiBinding passes DependencyProperty.UnsetValue to its converter rather than that result. There’s a forum thread discussing this behaviour but no real answer as to whether this is by design or a bug.

Watch this bug don’t getcha

There’s one other gotcha with custom markup extensions, and this one is definitely a bug in Visual Studio. If you define a custom markup extension and then, in XAML that is part of the same assembly, set one of the properties of that markup extension using a StaticResource, you’ll get a compile-time error similar to:

Unknown property '***' for type 'MS.Internal.Markup.MarkupExtensionParser+UnknownMarkupExtension' encountered while parsing a Markup Extension.

The workaround, as Clint discovered, is either to put your markup extension in a separate assembly (which is what I’ve done) or to use property element syntax for the markup extension in your XAML.
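For the record, property element syntax writes the markup extension as a nested XML element rather than in braces, which sidesteps the broken parser path. A hedged sketch of what that might look like for the earlier TextBlock (the converter resource name here is hypothetical):

```xml
<TextBlock HorizontalAlignment="Center" FontWeight="Bold" Margin="0,2,0,0">
  <TextBlock.Text>
    <!-- The extension as an element: its properties are now set
         through the normal XAML property pipeline -->
    <ext:ResourceKeyBinding Path="Name"
                            Converter="{StaticResource MyKeyConverter}" />
  </TextBlock.Text>
</TextBlock>
```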

Try it yourself

I’ve updated the code on the MSDN Code Gallery page – go see it for yourself.

Wednesday 23 December 2009

Specifying Resource Keys using Data Binding in WPF

Imagine you’re wanting to show a list of things in an ItemsControl, with each item having a different image. Using WPF’s implicit data templating support and giving each item type its own DataTemplate is one way of implementing this: but if there are many items, the image is the only thing that differs in each case, and the DataTemplate is of any complexity, your code will soon start to suffer DRY rot.

You could just pinch your nose and put the image in your ViewModel so that it can be databound in the normal way. Of course, images should really live in a ResourceDictionary: but how can you pick a resource out of a ResourceDictionary using data binding? Let me show you.

The example: AutoTherapist

Here’s what I want to build:

[Image: ResourceBindingSampleImage]

I’ve got a very simple ViewModel with a property exposing a list of the commands that sit behind the buttons in my Window:

public class WindowViewModel
{
    public IList<ICommand> Commands
    {
        get
        {
            return new ICommand[] { new AngryCommand(), new HappyCommand(), new CoolCommand() };
        }
    }
}

And here’s the relevant part of the View:

<ItemsControl Grid.Row="1" ItemsSource="{Binding Commands}">
    <ItemsControl.ItemTemplate>
      <DataTemplate>
        <Button Command="{Binding}" Padding="2" Margin="2" Width="100" Height="100">
          <StackPanel>
            <Image HorizontalAlignment="Center"
                   Width="60"
                   app:ResourceKeyBindings.SourceResourceKeyBinding="{Binding Converter={StaticResource ResourceKeyConverter}, ConverterParameter=Image.{0}}"/>
            <TextBlock Text="{Binding Name}" HorizontalAlignment="Center" FontWeight="Bold" Margin="0,2,0,0"/>
          </StackPanel>
        </Button>
      </DataTemplate>
    </ItemsControl.ItemTemplate>
    <ItemsControl.ItemsPanel>
      <ItemsPanelTemplate>
        <StackPanel Orientation="Horizontal"/>
      </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
</ItemsControl>

The ItemsControl is bound to the Commands property on my ViewModel. Each command is rendered with the same DataTemplate. The magic happens on the Image element, where I’m using the attached property ResourceKeyBindings.SourceResourceKeyBinding. This property allows me to data-bind the key of the resource I want to use for the Image.Source property. I’ll show you how that works in a minute, but first: where are the resource keys coming from?

You’ll notice that, since there’s no Path specified in the Binding, we’re binding directly to the data object - in this case, one of the commands. Then we’re using a converter to turn that object into the appropriate key. Here’s the code for the converter:

class TypeToResourceKeyConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        var formatString = parameter as string;
        var type = value.GetType();
        var typeName = type.Name;

        var result = string.Format(formatString, typeName);

        return result;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

What this is doing is getting the name of the Type of the data object and pushing that through the format string given as the parameter to the converter. Given the way the converter is set up in our case, this will produce resource keys like “Image.AngryCommand”, “Image.HappyCommand”, etc.
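The key-building step is easy to check in isolation. Here’s a hedged, stand-alone restatement of that logic with the WPF plumbing stripped out (the helper and command classes are illustrative, not part of the sample):

```csharp
using System;

// Stand-ins for the real commands in the sample
class AngryCommand { }
class HappyCommand { }

static class ResourceKeyDemo
{
    // The essence of TypeToResourceKeyConverter.Convert: push the
    // type name of the data object through the format string.
    public static string MakeKey(object value, string formatString)
    {
        return string.Format(formatString, value.GetType().Name);
    }
}
```

So ResourceKeyDemo.MakeKey(new AngryCommand(), "Image.{0}") yields "Image.AngryCommand", matching the x:Key values in App.xaml.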

So now all we need to make the AutoTherapist work is to define those resources. Here’s App.xaml:

<Application x:Class="ResourceKeyBindingSample.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Window1.xaml">
    <Application.Resources>
      <BitmapImage x:Key="Image.AngryCommand" UriSource="Angry.png"/>
      <BitmapImage x:Key="Image.CoolCommand" UriSource="Cool.png"/>
      <BitmapImage x:Key="Image.HappyCommand" UriSource="Happy.png"/>
    </Application.Resources>
</Application>

(The icons are from VistaIcons.com, by the way).

The implementation

So what does that attached property look like? It’s actually rather simple:

public static class ResourceKeyBindings
{
    public static DependencyProperty SourceResourceKeyBindingProperty = ResourceKeyBindingPropertyFactory.CreateResourceKeyBindingProperty(
        Image.SourceProperty,
        typeof(ResourceKeyBindings));

    public static void SetSourceResourceKeyBinding(DependencyObject dp, object resourceKey)
    {
        dp.SetValue(SourceResourceKeyBindingProperty, resourceKey);
    }

    public static object GetSourceResourceKeyBinding(DependencyObject dp)
    {
        return dp.GetValue(SourceResourceKeyBindingProperty);
    }
}

As you can see, I’ve factored out the magic into ResourceKeyBindingPropertyFactory. This makes it easy to create equivalent properties for any other target property (in the download, for example, I’ve made a StyleResourceKeyBinding property for binding to FrameworkElement.Style). ResourceKeyBindingPropertyFactory looks like this:

public static class ResourceKeyBindingPropertyFactory
{
     public static DependencyProperty CreateResourceKeyBindingProperty(DependencyProperty boundProperty, Type ownerClass)
     {
         var property = DependencyProperty.RegisterAttached(
             boundProperty.Name + "ResourceKeyBinding",
             typeof(object),
             ownerClass,
             new PropertyMetadata(null, (dp, e) =>
             {
                 var element = dp as FrameworkElement;
                 if (element == null)
                 {
                     return;
                 }

                 element.SetResourceReference(boundProperty, e.NewValue);
             }));

         return property;
     }
}

All we do here is register an attached property and set it up with a PropertyChanged handler: the handler simply takes the new value of the property – which in our case will be the resource key – and passes it to SetResourceReference along with the target property. SetResourceReference is the programmatic equivalent of using DynamicResource in XAML – it looks up the appropriate resource (from the current element’s ResourceDictionary or one of its ancestors’) and assigns it to the given property.
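In other words, once the attached property fires, the net effect for one item is as if you had written the resource reference by hand. A hedged, non-data-bound sketch of the equivalent XAML for the angry button’s image:

```xml
<!-- What SetResourceReference effectively produces for one item:
     a DynamicResource lookup against the key the binding supplied -->
<Image HorizontalAlignment="Center"
       Width="60"
       Source="{DynamicResource Image.AngryCommand}" />
```

The attached property simply lets the key part vary per data item, which DynamicResource alone cannot do.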

So there you have it: data binding for Resource Keys. Full source code for this sample is available from the MSDN Code Gallery.

Not for Silverlight

Unfortunately I don’t think it is possible to port this to Silverlight without a lot of work, because Silverlight has no support for Dynamic Resources. From a cursory look, I don’t think there is even runtime support built in for finding a resource in the Resource Dictionary chain up the ancestor tree of an element as there is in WPF: it looks like the Silverlight parser is responsible for doing this when required by StaticResource references. I would be delighted if someone could show me otherwise.

Friday 20 November 2009

Highlights of ‘Microsoft Project Code Name “M”: The Data and Modeling Language’

I’m having great fun watching the Microsoft PDC 2009 session videos, and blogging the highlights for future reference. In case you want to jump into the video, I’ve included some time-checks in brackets (0:00). Read the rest of the series here.

Note that there is currently a problem with the encoding of the video linked to below which means you can’t seek to an arbitrary offset. Hopefully that will be fixed soon.

In a 45 minute lunch-time talk, Don Box (once he’d finished munching his lunch) and Jeff Pinkston gave an overview of the “M” language (quotes, or air-quotes if you’re talking, are apparently very important to the PDC presenters invoking the name “M”, because it is only a code-name, and the lawyers get angsty if people aren’t reminded of that).

“M” is a language for data (1:26). Since last year, Microsoft have combined the three components MGraph, MGrammar and MSchema into this one language “M” that can be used for data interchange, data parsing and data schema definition. It will be released under the Open Specification Promise (OSP) which effectively means that anybody can implement it freely.

As Don said, “M lives at the intersection of text avenue and data street” (2:56). This means that not only is M a language for working with data, but M itself can be represented in text or in the database.

At PDC last year, Microsoft demonstrated “M” being used as an abstraction layer above T-SQL, and also as a language for defining Grammar specifications. This year, they are introducing “M” as a way of writing Entity Framework data models (4:00).

Getting into the detail, Don started by describing how M can be used to write parsers: functions that turn text into structured data (5:02). You use M to define your language in terms of Token rules and Syntax rules (6:02): Token rules describe the patterns that sequences of characters should match, and Syntax rules are sequences of tokens. Apparently there are a few innovations in the “M” language: it uses a GLR parser and it can have embedded token sets to allow, for example, XML and JSON to be used in the same language (6:40) [sorry I can’t make it clearer than that - Don rushed through this part].

Then came a demo (8:15) in which Don and Jeff showed how a language can be interactively defined in Intellipad, with Intellipad even doing real-time syntax checking on any sample input text you provide it with.

[Image: Intellipad example]

Notice from the screenshot how you can define pattern variables in your syntaxes (and also in tokens) and then use these in expressions to structure the output of your language: in syntax Tweet for example, the token matching HashTag is assigned to variable h, and then the output of the syntax (the part following the =>) is output under the name “Hash”. Integration with Visual Studio has also been improved to make it easy to load a language, use it to parse some input, and then work with the structured output in your code.

Next Don talked a bit about typing in “M” (22:10). “M” uses structural typing. Think of this as being a bit like duck typing: if two types have members with the same name and same types then they are equivalent. M has a number of intrinsic types (logical, number, text, etc.) and a number of “data compositors” – ways of combining types together – like collections, sequences and records.
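Reconstructed from memory of the PDC-era “M” documentation (so treat the exact syntax as approximate), an intrinsic type combined with a collection compositor looked something like this:

```
module Demo
{
    // A record type built from intrinsic types
    type Person
    {
        Name : Text;
        Age : Number;
    }

    // A collection compositor: an extent holding Person records
    People : Person*;
}
```

Under structural typing, any other type with a Text Name and a Number Age would be interchangeable with Person, regardless of its declared name.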

Don followed this up with a demo of “Quadrant” (there go the air-quotes again) (25:36), showing how this can be used to deploy schemas defined in “M” to the database. “M” is tightly integrated into “Quadrant”: you can type in “M” expressions and it will execute them directly against your database, showing the results in pretty tables (34:50).

Don finished off by talking about how M can be used for defining EDM models (34:19): many people prefer describing models in text rather than in a designer, especially since performance in the EDM designer suffers with large models.

Scott Hanselman interviews James Bach

To enliven my journey to work this morning I listened to Scott Hanselman’s interview with James Bach, an international consultant on software testing. They were talking about James’ new book, Secrets of a Buccaneer-Scholar.

James has led an interesting life. He was kicked out of home when he was 14, so moved into a motel room with his computer. There he taught himself Assembly language programming. When he was 16, he dropped out of high school and started a career as a video game programmer.

Listen to the interview to find out how he made the journey to tester, speaker, writer, and proponent of Exploratory testing.

Thursday 19 November 2009

Highlights of “Data Programming and Modeling for the Microsoft .NET Developer”

I’m having great fun watching the Microsoft PDC 2009 session videos, and blogging the highlights for future reference. In case you want to jump into the video, I’ve included some time-checks in brackets (0:00). Read the rest of the series here.

Don Box and Chris Anderson gave a very watchable presentation, Data Programming and Modeling for the Microsoft .NET Developer. This is an overview of how we .Net developers have done data access in the past, and how we will be doing it in the future.

Chris Anderson started with a reminder of the dark ages of data access in .Net, SqlConnection and IDataReader (3:40). Then he showed how an Entity Framework data model could be layered on top of the database. Entity Framework provides something called an EntityConnection which works like SqlConnection, but in terms of the entities in your model, not database tables. You can write queries in something called EntitySql, which allows you to “.” your way through object relationships without using joins (see 7:48). Most often though, Entities are accessed using the ObjectContext, which gives strongly-typed access to the entities and permits LINQ queries over them (11:18).

Attention then turned to the way we define our databases and models. Traditionally we would start with the database, and build an Entity Framework model on top of it. As from Entity Framework v4 we will be able to define the model first, and have Entity Framework generate the database from it (14:20). But we can go further than this. Using a CTP of an API that will be released after .Net 4, it’s possible to define a model entirely in code using Plain-Old-CLR-Objects (POCO), and then generate a database from this (19:44). But which approach is best? Chris provided this helpful slide:

[Image: WhichApproachToModelling]

Don Box then took over (33:40) to talk about the OData Protocol. This is a new name for the protocol used by ADO.Net Data Services (formerly known as Astoria). It is based on the Atom publishing format and it provides REST-based access to data. As well as querying (sorting, filtering, projection, etc.), it supports updates to the data.

[Image: OData picture]

Don demoed how SharePoint 2010 supports this format (37:35). He showed how it makes use of the Entity Data model to provide meta-data about the structure of the data (39:00). Excel 2010 has support for querying data in this format (39:40). Naturally .Net applications can query this kind of data (40:45), but there is also an API that makes it easy to write services that provide data in this format (45:00). According to Don, “OData is the new ODBC”!

In the last ten minutes, Don talked about the connection that all this has with the “M” language – how M can be used to create the Entity Model for example.

Future Directions for C# and Visual Basic

I’m having great fun watching the Microsoft PDC 2009 session videos, and blogging the highlights for future reference. In case you want to jump into the video, I’ve included some time-checks in brackets (0:00). Read the rest of the series here.

Luca Bolognese opened his session Future Directions for C# and Visual Basic by announcing that the strategy for future development of C# and Visual Basic is one of co-evolution. As he said, the previous strategy where each language was independent, and one would get new features that the other didn’t was “maximising unsatisfaction”. Now there is one compiler team responsible for both languages, and any big new features added to one will get added to the other. VS 2010 already brings the two languages much closer together, and this will increase in the future.

In the first half of the presentation, Luca talked about the three big trends in current languages: Declarative, Dynamic and Concurrent.

[Image: LanguageTrends]

In a demo (starting at 6:20 in the video) Luca created a CSV file parser. He showed (12:08) how writing the code in declarative style (using LINQ) not only makes it easier to read, it also makes it easier to parallelize. As simple as adding AsParallel() to the middle of the query, in fact (15:18). The Task Parallel Library (part of .Net 4) makes it possible to parallelize imperative code (for loops, etc.), but with much greater potential for bugs like unsynchronized collection access (16:20). Luca then went on to demonstrate the dynamic language features in C# 4, and the DynamicObject base class (24:38).
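The declarative-to-parallel step boils down to something like this hedged sketch (the CSV parsing is reduced to a toy in-memory example; Luca’s actual demo code wasn’t published with the talk):

```csharp
using System;
using System.Linq;

class PlinqSketch
{
    static void Main()
    {
        var lines = new[] { "1,2", "3,4", "5,6" };

        // Declarative (LINQ) version: sum the first column of each row.
        var total = lines
            .Select(line => line.Split(','))
            .Select(fields => int.Parse(fields[0]))
            .Sum();

        // Parallel version: the same query, just routed through PLINQ
        // by inserting AsParallel() into the chain.
        var parallelTotal = lines
            .AsParallel()
            .Select(line => line.Split(','))
            .Select(fields => int.Parse(fields[0]))
            .Sum();

        Console.WriteLine(total == parallelTotal); // both compute 9
    }
}
```

Because the query describes *what* to compute rather than *how*, PLINQ is free to partition the work across cores without any locking code from us.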

Then he turned to the future, but not without a disclaimer that there were no promises of anything he talked about actually shipping. It seems, however, that Microsoft are pretty firmly committed to the first item he mentioned: rewriting of the C# compiler in C# and the VB.Net compiler in VB.Net, and opening up the black box so that we can make use of the lexer, parser, code generator, etc. From what he said later on, I gather that most of the team are currently involved in completing this work.

Luca demonstrated (36:00) how in just 100 lines of code he could use the new APIs to create a refactoring that would re-order the parameters of a method (and take care of the call site). Previously, anyone wanting to do something like this would have first needed to write a lexer and a parser, but that would be provided for us.

[Image: RefactoringExample]

The last demonstration was of something Luca called “resumable methods”. These appear to be something akin to async workflows in F#. By prefixing an expression with the “yield” statement, Luca indicated to the compiler that he wanted the method to be called asynchronously (50:30). The compiler then takes care of rewriting the code so that execution resumes at the next statement once the asynchronous execution of the first statement completes. The benefit of this is that the thread can be used for something else meanwhile. By getting the compiler to do the rewriting we can avoid a whole lot of ugly code (see the video at 46:24).

[Image: ResumableMethodsImplementation]

One other thing Luca mentioned as being considered by the team is support for immutability (52:41). He said that they had considered 4 or 5 different designs but hadn’t yet settled on one that was exactly right. Part of the problem is that so much of the language is affected: not just the types themselves, but parameters, fields, etc.

If you want more on this, read Sasha Goldstein’s account of Luca’s talk.

Wednesday 18 November 2009

PDC Day 2 Keynote round-up

Reading the Twitter stream before the Keynote of PDC Day 2, the general consensus was that the keynote of Day 1 was rather dull. Much like last year, it seems that Day 1 was for the Suits, whereas Day 2 was for the Geeks.

And were the geeks thrilled today! Steven Sinofsky, after demonstrating a whole lot of luscious hardware (including a Server-replacement laptop, and another laptop so thin that it seemed to disappear when turned edge-on) announced that all fully-paid up attendees would be given a free Acer laptop. The Gu then went on to announce the Beta of Silverlight 4, and Kurt Del Bene flicked the switch on the Office 2010 Beta.

These announcements have been covered in detail elsewhere, so I’ll leave you with the best links I’ve found.

Thursday 12 November 2009

PDC 2009 @ Home

Can you believe it? A year flown by already since I jetted off to LA, reclined to the soothing sound of Scott Gu's Keynote, and all-round indulged in a general geek-out at Microsoft PDC 2008! And already Microsoft PDC 2009 is upon us.

Windows 7, which we gawped at for the first time twelve months ago, is now gracing our desktops; Windows Azure is almost ready to go live; and after a good 52 weeks of head-scratching, developers are just beginning to work out what Oslo is and isn't good for.

What with our baby being born and Bankers busting our economy, I'm not going to be present in person at the PDC this year. But the Directors at Paragon have very kindly granted us all a couple of days to be present in spirit by taking advantage of the session videos that Microsoft generously post online within 24 hours of the event.

So which sessions will I be watching?

Top of my list are the PDC specialties, the sessions usually flagged up with the magic word "Future" in their title. What developer doesn't go to PDC with an ear greedy for announcements of shiny new features for their favourite platform?

The other thing PDC is famous for is “deep-dives”: talks by the architects and developers of the various technologies on show, laying bare the inner workings of their creations. This year I’ll probably focus on WPF and Silverlight.

As well as watching these sessions, I hope to find some time over the next week for blogging about my discoveries. So why don’t you follow along?

Monday 2 November 2009

A Fiddler plug-in for inspecting WCF Binary encoded messages

If ever you're needing to debug the interaction between a Web Service and its clients, Microsoft’s Fiddler is the tool to use - this includes WCF Services so long as they're using an HTTP transport. The only thing Fiddler won't do is decode messages that are sent using WCF's proprietary Binary encoding - until today, that is: at lunch time, I took advantage of Fiddler's neat extensibility mechanism and created a rough-and-ready Inspector that will translate binary messages from gobbledegook to plain xml for your debugging pleasure.

You can download the plug-in and source from MSDN Code Gallery. To use it, just drop the plug-in in the Inspectors folder of your Fiddler installation. Once you've reloaded Fiddler, switch to the Inspectors tab and look for WCF Binary.

[Image: WCFBinaryFiddlerPlugin]

Implementation Notes

  • There’s a very helpful page on the Fiddler site which tells you how to build Inspectors in .Net.
  • Fiddler gives each Inspector the raw bytes of each message, and it can do with it what it likes. Here’s how I decode a WCF Binary encoded message:
using System;
using System.Runtime.Serialization;
using System.ServiceModel.Channels;

...

private static readonly BufferManager _bufferManager = BufferManager.CreateBufferManager(int.MaxValue, int.MaxValue);

...

private string GetWcfBinaryMessageAsText(byte[] encodedMessage)
{
    var bindingElement = new BinaryMessageEncodingBindingElement();
    var factory = bindingElement.CreateMessageEncoderFactory();
    var message = factory.Encoder.ReadMessage(new ArraySegment<byte>(encodedMessage), _bufferManager);
    return message.ToString();
}

Thursday 22 October 2009

Getting the MethodInfo of a generic method using Lambda expressions

Getting hold of the MethodInfo of a generic method via Reflection (so you can invoke it dynamically, for example) can be a bit of a pain. So this afternoon I formulated a pain-killer, SymbolExtensions.GetMethodInfo. It’s not fussy: it works for non-generic methods too. You use it like this:

internal class SymbolExtensionsTests
{
    [Test]
    public void GetMethodInfo_should_return_method_info()
    {
        var methodInfo = SymbolExtensions.GetMethodInfo<TestClass>(c => c.AMethod());
        methodInfo.Name.ShouldEqual("AMethod");
    }

    [Test]
    public void GetMethodInfo_should_return_method_info_for_generic_method()
    {
        var methodInfo = SymbolExtensions.GetMethodInfo<TestClass>(c => c.AGenericMethod(default(int)));

        methodInfo.Name.ShouldEqual("AGenericMethod");
        methodInfo.GetParameters().First().ParameterType.ShouldEqual(typeof(int));
    }

    [Test]
    public void GetMethodInfo_should_return_method_info_for_static_method_on_static_class()
    {
        var methodInfo = SymbolExtensions.GetMethodInfo(() => StaticTestClass.StaticTestMethod());

        methodInfo.Name.ShouldEqual("StaticTestMethod");
        methodInfo.IsStatic.ShouldBeTrue();
    }
}

The active ingredient, as you can see, is Lambda expressions:

using System.Linq;
using System.Linq.Expressions;
using System.Reflection;
using System;

public static class SymbolExtensions
{
    /// <summary>
    /// Given a lambda expression that calls a method, returns the method info.
    /// </summary>
    /// <param name="expression">The expression.</param>
    /// <returns></returns>
    public static MethodInfo GetMethodInfo(Expression<Action> expression)
    {
        return GetMethodInfo((LambdaExpression)expression);
    }

    /// <summary>
    /// Given a lambda expression that calls a method, returns the method info.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="expression">The expression.</param>
    /// <returns></returns>
    public static MethodInfo GetMethodInfo<T>(Expression<Action<T>> expression)
    {
        return GetMethodInfo((LambdaExpression)expression);
    }

    /// <summary>
    /// Given a lambda expression that calls a method, returns the method info.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <typeparam name="TResult"></typeparam>
    /// <param name="expression">The expression.</param>
    /// <returns></returns>
    public static MethodInfo GetMethodInfo<T, TResult>(Expression<Func<T, TResult>> expression)
    {
        return GetMethodInfo((LambdaExpression)expression);
    }

    /// <summary>
    /// Given a lambda expression that calls a method, returns the method info.
    /// </summary>
    /// <param name="expression">The expression.</param>
    /// <returns></returns>
    public static MethodInfo GetMethodInfo(LambdaExpression expression)
    {
        MethodCallExpression outermostExpression = expression.Body as MethodCallExpression;

        if (outermostExpression == null)
        {
            throw new ArgumentException("Invalid Expression. Expression should consist of a Method call only.");
        }

        return outermostExpression.Method;
    }
}
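Once you have the MethodInfo in hand, dynamic invocation is straightforward. Here's a sketch (it assumes the SymbolExtensions class above is in scope) that grabs the MethodInfo of Enumerable.Empty&lt;int&gt;() without any magic strings, then re-closes the generic method over a type chosen at runtime:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Assumes the SymbolExtensions class shown above is in the same project.
static class DynamicInvokeDemo
{
    static void Main()
    {
        // Get the MethodInfo for Enumerable.Empty<int>() via the lambda trick...
        var emptyOfInt = SymbolExtensions.GetMethodInfo(() => Enumerable.Empty<int>());

        // ...then re-close the generic method over a type known only at runtime,
        // and invoke it dynamically (it's static, so no target instance is needed).
        var emptyOfString = emptyOfInt.GetGenericMethodDefinition()
                                      .MakeGenericMethod(typeof(string));
        var result = (IEnumerable<string>)emptyOfString.Invoke(null, null);

        Console.WriteLine(emptyOfInt.Name);  // Empty
        Console.WriteLine(result.Count());   // 0
    }
}
```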

Thursday 15 October 2009

Flattery (almost) works

I checked my email using my smart-phone the other night, and found this:

Anonymous has left a new comment on your post "I'll be speaking at PDC 2008...":

Masters thesisI have been through the whole content of this blog which is very informative and knowledgeable stuff, So i would like to visit again.

How nice. I'm always chuffed when somebody takes the time to leave an appreciative comment on my blog. And this one stoked my ego. At a glance it seemed that Mr (or Ms) Anonymous was likening the quality and content of my work to a Masters thesis; though the wording of the message was a little strange ... perhaps English was not their native language?

Ever polite, the next time I went to my PC I fired up the browser and found the appropriate page on my blog with the intention of thanking Mr (or Ms) Anonymous for their kind words and inviting them to visit as often as they liked.

But the browser revealed what my phone's mail client had concealed. The words "Masters Thesis" were underlined in blue - a hyperlink to a site whose page rank I will not promote by repeating the link. Cue sound of hissing steam as ego is rapidly quenched.

Out of curiosity at the wares being peddled by these flattersome spammers, I followed the link, and found offers of written-to-order essays and Masters theses - guaranteed native English, and plagiarism-free. The kind-hearted scholars at the site will also undertake to complete assignments and exams for online courses and can guarantee to ace them all - for a small consideration of $2000 plus $300 per exam.

Needless to say, I put my blog's trash-can button to good use.

Saturday 10 October 2009

Practical LINQ #4: Finding a descendant in a tree

Trees are everywhere. I don’t mean the green, woody variety. I mean the peculiar ones invented by programmers that have their root somewhere in the clouds and branch downwards. As inevitably as apples fall from trees on the heads of meditating physicists, so coders find themselves writing code to traverse tree structures.

In an earlier post we explored the tree created by the UI Automation API representing all the widgets on the Windows desktop. One very common tree-traversal operation is finding a particular child several nodes down from a root starting point. I gave the example of locating the Paint.Net drawing canvas within the main window of the application:

// find the Paint.Net drawing Canvas
var canvas = mainWindow.FindDescendentByIdPath(new[] {
    "appWorkspace",
      "workspacePanel",
        "DocumentView",
          "panel",
            "surfaceBox" });

What we have here is an extension method, FindDescendentByIdPath, that starts from a parent element and works its way down through the tree, at each level picking out the child with the given Automation Id. In my last post, I skipped over my implementation of this method, but it deserves a closer look because of the way it uses Functional techniques and LINQ to traverse the hierarchy.

So here it is:

public static AutomationElement FindDescendentByIdPath(this AutomationElement element, IEnumerable<string> idPath)
{
    var conditionPath = CreateConditionPathForPropertyValues(AutomationElement.AutomationIdProperty, idPath.Cast<object>());

    return FindDescendentByConditionPath(element, conditionPath);
}

public static IEnumerable<Condition> CreateConditionPathForPropertyValues(AutomationProperty property, IEnumerable<object> values)
{
    var conditions = values.Select(value => new PropertyCondition(property, value));

    return conditions.Cast<Condition>();
}

public static AutomationElement FindDescendentByConditionPath(this AutomationElement element, IEnumerable<Condition> conditionPath)
{
    if (!conditionPath.Any())
    {
        return element;
    }

    var result = conditionPath.Aggregate(
        element,
        (parentElement, nextCondition) => parentElement == null
                                              ? null
                                              : parentElement.FindChildByCondition(nextCondition));

    return result;
}

The first thing we do is convert the sequence of Automation Id strings into a sequence of Conditions (actually PropertyConditions) that the UI Automation API can use to pick out children of parent elements. This is handled for us by the CreateConditionPathForPropertyValues method.

The method of real interest is FindDescendentByConditionPath. Here we put the Enumerable.Aggregate method to a slightly unconventional use. Considered in the abstract, the Aggregate method takes elements from a sequence one by one and applies a function (of your choice) to combine each element with the previous result; the very first element is combined with a seed value.

In this case the seed value is the parent element at the top of the tree, and the sequence of elements is the list of conditions that we use to pick out the child at each level of the tree. We provide a lambda function that, each time it is called, takes the element found in the previous iteration, together with the next condition from the list, and uses an extension method that I demonstrated in the earlier blog post, FindChildByCondition, to find the appropriate child.
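If Aggregate's seed-and-fold behaviour is new to you, here's a minimal illustration with nothing UI Automation specific in it:

```csharp
using System;
using System.Linq;

class AggregateDemo
{
    static void Main()
    {
        // Seed is "root"; each element is folded into the running result in turn,
        // just as each Condition selects the next level down the tree.
        var path = new[] { "child", "grandchild", "leaf" }
            .Aggregate("root", (current, next) => current + "/" + next);

        Console.WriteLine(path); // root/child/grandchild/leaf
    }
}
```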

I’ve found this method a great help when monkeying around in UI Automation trees. If you think you might too, look for it in the source code from last time.

Friday 2 October 2009

Work around for WPF Bug: Drag and Drop does not work when executing a WPF application in a non-default AppDomain

Today's task was implementing Drag/Drop to move items into folders. My estimate of yesterday was 6 hours to get the job done. That should have been plenty: then I discovered the bug.

We're creating an Excel Add-in using Add-in Express (which is very like VSTO). All the UI for our add-in is built with WPF (naturally). When Add-in Express hosts add-ins for Excel, it isolates each of them in its own AppDomain. Which is why today I came to add myself to the list of people who have reproduced this issue on the Connect website: "Drag and Drop does not work when executing a WPF application in a non-default AppDomain".

Googling around the problem, I found that Tao Wen had also encountered the problem, and had managed to find something of a work-around.

Having Reflectored around the WPF source-code a little, I can see that when WPF is hosted in a non-default App Domain, the check that the HwndSource class (which manages the Win32 Window object on behalf of the WPF Window class) does to ensure the code has the UnmanagedCode permission fails, so Windows do not register themselves as OLE Drop Targets (see the last few lines of the HwndSource.Initialize method)

Tao Wen's solution is to use reflection to directly call the internal DragDrop.RegisterDropTarget method. What his solution doesn't take into account is that, when the Window is closed it must unregister itself as a drop target.

Fortunately, by incrementing the _registeredDropTargetCount field in the HwndSource class we can ensure that HwndSource will itself call DragDrop.RevokeDropTarget when it's disposed (see about half-way down the HwndSource.Dispose method).

For anybody else who is blocked by the same issue, I've created a little class that will implement the hack on a Window class. Be warned: it uses evil reflection code to access private members of WPF classes, and it might break at any time. Use it at your own risk.

/// <summary>
/// Contains a helper method to enable a window as a drop target.
/// </summary>
public static class DropTargetEnabler
{
    /// <summary>
    /// Enables the window as drop target. Should only be used to work around the bug that
    /// occurs when WPF is hosted in a non-default App Domain
    /// </summary>
    /// <param name="window">The window to enable.</param>
    /// <remarks>
    /// This is a hack, so should be used with caution. It might stop working in future versions of .Net.
    /// The original wpf bug report is here: http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=422485
    /// </remarks>
    // This code was inspired by a post made by Tao Wen on the Add-in Express forum:
    // http://www.add-in-express.com/forum/read.php?FID=5&TID=2436
    public static void EnableWindowAsDropTarget(Window window)
    {
        window.Loaded += HandleWindowLoaded;
    }

    private static void HandleWindowLoaded(object sender, RoutedEventArgs e)
    {
        // See the HwndSource.Initialize method to see how it sets up a window as a Drop Target
        // also see the HwndSource.Dispose for how the window is removed as a drop target
        var window = sender as Window;

        IntPtr windowHandle = new WindowInteropHelper(window).Handle;

        // invoking RegisterDropTarget calls Ole32 RegisterDragDrop
        InvokeRegisterDropTarget(windowHandle);

        // to make sure RevokeDragDrop gets called we have to increment the private field
        // _registeredDropTargetCount on the HwndSource instance that the Window is attached to
        EnsureRevokeDropTargetGetsCalled(windowHandle);
    }

    private static void InvokeRegisterDropTarget(IntPtr windowHandle)
    {
        var registerDropTargetMethod = typeof(System.Windows.DragDrop)
            .GetMethod("RegisterDropTarget", BindingFlags.Static | BindingFlags.NonPublic);
        if (registerDropTargetMethod == null)
        {
            throw new InvalidOperationException("The EnableWindowAsDropTarget Hack no longer works!");
        }

        registerDropTargetMethod
            .Invoke(null, new object[] { windowHandle });
    }

    private static void EnsureRevokeDropTargetGetsCalled(IntPtr windowHandle)
    {
        var hwndSource = HwndSource.FromHwnd(windowHandle);
        var fieldInfo = typeof(HwndSource).GetField("_registeredDropTargetCount", BindingFlags.NonPublic | BindingFlags.Instance);
        if (fieldInfo == null)
        {
            throw new InvalidOperationException("The EnableWindowAsDropTarget hack no longer works!");
        }

        var currentValue = (int) fieldInfo.GetValue(hwndSource);
        fieldInfo.SetValue(hwndSource, currentValue + 1);
    }
}

Use it like this:

public partial class MyWindow
{
    public MyWindow()
    {
        InitializeComponent();
        DropTargetEnabler.EnableWindowAsDropTarget(this);
    }
}

Wednesday 30 September 2009

Googling a tribute to my Grandfather

My Grandpa, Dr R.A.F. Jack, passed away a few months ago, aged 89. It was a comfort at his funeral to see how highly regarded he was in the community and beyond: as well as being the village doctor for close to half-a-century, he was a respected and well-loved elder in our church, and he was known in the medical field for his work with homeopathy.

Though he pre-dated the internet by many years, when I googled his name, I turned up several references. Admittedly, these results were somewhat drowned out by pages referring to a certain aviationary body born two years before him. But once I’d filtered those out, I found a few gems.

I discovered that Amazon stock his book, Homeopathy in General Practice, currently number 1,252,945 on their Best Seller list. I found a bibliography of his other published works. Then there was a magazine article about his work written by an appreciative patient. She records his unorthodox method of checking that she wasn’t suffering from a trapped nerve – putting a sandbag under her knee, and bouncing another on top!

And finally, in the Letters page of the British Medical Journal from June 1955 I found this:

Redundant Circulars
Dr. R. A. F. JACK (Bromsgrove) writes: My young family have
helped me to deal with the surfeit of advertisements that daily
swell my morning post. The elder two, who by now have lost
all interest in opening them, content themselves with collecting
the stamps and bemoan the fact that so many envelopes these
days are franked. They then pass them on to the younger two,
who, armed with a pair of scissors each, cut out patterns or any
interesting figures or illustrations that take their fancy. They
preserve all blotters for me; and all large white envelopes that
can be ungummed are put aside for subsequent painting practice
or for conversion into paper darts. The result is that my wife and
I get an extra quarter of an hour's peace each morning, so that in
one way or another we all benefit from the unremitting onslaught
of the various drug houses.

The elder of the two stamp collectors mentioned is my Dad, now himself a GP, and semi-retired!

Friday 25 September 2009

If your WCF service is unexpectedly receiving a null parameter value, try this…

An interesting problem ate up an hour and a half of my life yesterday morning. It was all to do with a WCF service.

The parameter that I was carefully formulating on one end of the wire and passing to my service proxy was not being received by my service at the other end - instead it was getting a null value. It worked the day before - why had it stopped working overnight?

Intercepting the http traffic with the help of the wonderful Fiddler (and a useful article about Fiddlering .Net by Rick Strahl) confirmed that a serialized version of the parameter was indeed being sent down the pipe and was arriving at the server - so why wasn't my service getting it?

After much head-scratching, I thought of checking the history of our Server project in the Source code repository. That gave me the clue I needed. In the file containing the interface which defines the service's contract, I noticed that the name of the parameter on the method in question had been changed.

Now at this point I should explain that, partially inspired by this article, we have chosen to eschew the usual route of having Visual Studio create our Service proxies for us. We prefer to share the assembly containing the Service contract interfaces and Data Contract objects between the client and the server projects. This approach works very nicely most of the time. It saves us having to "Update Service Reference" every time we change the shape of the server side, and it reduces our dependence on the magic that Visual Studio usually works on our behalf.

Where this approach does get a little bit messy is when you want to have asynchronous versions of the operations on the client side (Begin[Operation] and End[Operation] methods). Clearly we don't need these on the contract implemented by the Server, so what we end up doing is creating a copy of the Contract named something like IContractClient which contains both the synchronous and also asynchronous versions of the operations.
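To make that concrete, here's a sketch of the shape we end up with. The service and operation names are hypothetical; the points to note are the matching Name on the client contract's ServiceContract attribute, and the AsyncPattern = true Begin/End pair:

```csharp
using System;
using System.ServiceModel;

// The contract the server implements, in the shared assembly.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

// The client-side copy: the same operations, plus asynchronous versions.
// The Name property keeps the contract name on the wire identical.
[ServiceContract(Name = "IOrderService")]
public interface IOrderServiceClient
{
    [OperationContract]
    string GetOrderStatus(int orderId);

    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetOrderStatus(int orderId, AsyncCallback callback, object state);

    string EndGetOrderStatus(IAsyncResult result);
}
```

Note that the operation signatures (parameter names included) have to stay in step between the two copies, in both the synchronous and asynchronous versions.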

Now can you see where this is going?

The problem turned out to be that the name of the parameter on the client version of the contract was different to that on the server version of the contract. The message on the wire contained the client’s (incorrect) name for the parameter. So presumably, when WCF received the message, it failed to match the parameters with those expected by the method on the server, shrugged its shoulders, and passed in a null value to the method.

But the story doesn't end there. I found the client version of the contract interface, made what I thought was the necessary correction, and whilst Visual Studio was rebuilding I announced to my colleague that I'd fixed the problem. Only to find, when I tested it, that it was still broken.

More head-scratching, and Fiddlering: the old parameter name was still being passed through. Huh? Clean and Rebuild to make definitely sure the change had stuck. Still no go.

And then the final ray of light. On the client side we were using the asynchronous version of the operation, so it was the Begin[Operation] method that I'd corrected. And I'd left the synchronous version untouched.

Once that was corrected everything started working, and I could finally begin the day's work.

In conclusion, for all you googlers:

If your WCF service is receiving null values instead of the parameters you are expecting, check that the parameter names on your methods on the client version of the contract match those on the server version of the contract. And on the client side, be sure to check both the synchronous and asynchronous definitions of your methods.

Tuesday 25 August 2009

Introducing HymnSheet – free, simple song presentation software for Churches and Schools

Our church jumped into the computer age a few years back. We ditched the Overhead Projector that had served us faithfully for many years, and acquired one of its digital brethren, along with a laptop. Ah, OHPs – those were the days – amazing what they could do with just a few lights, lenses, mirrors, and on a good day, no smoke. Anyway, I volunteered to sort out some software to do what the old OHP did for us – display songs on the screen for the Sunday School scholars to sing.

Powerpoint was an obvious first candidate, but it was rejected after a few moments’ consideration: our Sunday School leader likes to be able to add choruses to the mix at a moment’s notice - imagine trying to insert slides into a Powerpoint presentation on the fly!

Then I scoured the ‘net for more specialised alternatives – and found plenty. But I noticed two things about all of them: firstly, they cost money; and secondly, all of them did too much – we prefer not to have our attention distracted from worship by snazzy graphics and alpha-blended video, or by nursery notices popping up to inform Mrs Jones that little Johnny needs his nappy changing.

What self-respecting programmer would not see this state of affairs as a challenge? Surely it would only take a few evenings to bash out something that would work exactly the way we wanted it to?

So (a few)^10 evenings later I had a little application ready to use. And we’ve been using it successfully for the last four years. Dragging and dropping song titles into a playlist sure beats sorting through a deck of transparencies.

Ever since I started this blog, I’ve been intending to put the application and source code online in the hope that others will find it useful. Finally I’ve got round to it. It is my birthday gift to you all (28 today – I reckon I’ll start feeling grown up in about two years time!).

What you should know about HymnSheet

So here it is, imaginatively named HymnSheet: simple song presentation software, especially suitable for children’s services, and, I imagine, School assemblies and music lessons. It features:

  • Support for dual monitors (or a laptop and a projector). One monitor displays the control screen, the other the song display
  • Drag-and-drop interface for putting songs into your play-list and for reordering them on the fly
  • Ability for you to highlight the current line of the song, to assist children learning to read
  • Easy to use song editor for adding your own songs to the song library
  • Whole song displayed at once on screen, with key shortcuts for scrolling between verses and the chorus of the song.

Get started by downloading the installer (note that you’ll need the .Net Framework installed on your computer) and reading the user guide. I’ve also made available some sample hymns and choruses (mostly of the more traditional kind) to get you started.

If you find this useful, you might like to help me persuade my wife that it was worth my while - if you’re sufficiently persuasive, she may allow me the time to add some new features.

Points of interest in the Source Code

For the more technically minded, I’ve made the source code to HymnSheet available – feel free to adapt it and add to it if you want – I’d love to hear from you if you do. Some points of interest in the source code:

  • The code is written in VB.Net. I know, I’m sorry. The reason is, I started working on this in the days before I had broadband. At the time I only had a CD of Visual Basic 2005 Express Beta which I’d picked up at a conference. By the time I had a proper language installed on my machine, I didn’t fancy the job of rewriting ;-). Maybe one day…
  • Songs are stored in xml format. I use an XSL transform to turn this into HTML (with CSS to style it) and then display it in a Webbrowser control (maximised to fill the whole of the display screen). This gives you complete control over the way songs are presented. In particular, you might want to change the fonts or colours – the settings for these are in the songs.css file.
  • If you’ve never taken a look at the Webbrowser control, you should – it’s cool. As well as letting you load HTML from memory, it provides a managed DOM view of the HTML document, complete with events; you can also invoke any scripts embedded in the page.
  • I use some Javascript to animate the movement between verses. From IE 7 onwards, the default security settings prevent javascript from running in a HTML document that is loaded from memory (as is the case in HymnSheet). If you host the Webbrowser control in an application and want to enable javascript, you need to set a key in the registry: [HKEY_LOCAL_MACHINE or HKEY_CURRENT_USER]\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BLOCK_LMZ_SCRIPT\[You executable name.exe], with value of 0. I do this in the installer, but I’ve also created a class with a method that will enable scripting for the current application: look for it in the source code – it’s called EnableWebBrowserScripting.
  • I’ve included the source code for the installer (see HymnSheet.wxs). If you want to build this, you’ll need to install WiX, the Windows Installer XML toolkit; you might also want to try out the excellent WixEdit for editing Wix scripts.
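Although HymnSheet itself is written in VB.Net, the registry tweak described in the javascript bullet above can be sketched in a few lines of C#. This is an approximation of what the EnableWebBrowserScripting method does (the key path is the one given above; I use the per-user hive here to avoid needing admin rights):

```csharp
using System;
using System.IO;
using Microsoft.Win32;

public static class WebBrowserScriptingDemo
{
    // Allow javascript to run in HTML that the current executable loads into a
    // WebBrowser control from memory (IE7+ blocks it by default).
    public static void EnableWebBrowserScripting()
    {
        var exeName = Path.GetFileName(Environment.GetCommandLineArgs()[0]);

        using (var key = Registry.CurrentUser.CreateSubKey(
            @"SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BLOCK_LMZ_SCRIPT"))
        {
            // 0 means: do not block Local Machine Zone script for this executable.
            key.SetValue(exeName, 0, RegistryValueKind.DWord);
        }
    }
}
```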

Tuesday 4 August 2009

An introduction to UI Automation – with spooky spirographs

A few weeks ago, I unearthed a hidden gem in the .Net framework: the UIAutomation API. UI Automation made its entrance as part of .Net 3.0, but was overshadowed by the trio of W*Fs – if they’d named it Windows Automation Foundation it might have received more love! UIAutomation provides a robust way of poking, prodding and perusing any widget shown on the Windows desktop; it even works with Silverlight. It can be used for many things, like building Screen Readers, writing automated UI tests – or for creating a digital spirit to spook your colleagues by possessing Paint.Net and sketching spirographs.

Spirograph in Paint.Net

The UI Automation framework reduces the entire contents of the Windows Desktop to a tree of AutomationElement objects. Every widget of any significance, from Windows, to Menu Items, to Text Boxes is represented in the tree. The root of this tree is the Desktop window. You get hold of it, logically enough, using the static AutomationElement.RootElement property. From there you can traverse your way down the tree to just about any element on screen using two suggestively named methods on AutomationElement, FindAll and FindFirst.

Each of these two methods takes a Condition instance as a parameter, and it uses this to pick out the elements you’re looking for. The most useful kind of condition is a PropertyCondition. AutomationElements each have a number of properties like Name, Class, AutomationId, ProcessId, etc, exposing their intimate details to the world; these are what you use in the PropertyCondition to distinguish an element from its siblings when you’re hunting for it using one of the Find methods.

Finding Elements to automate

Let me show you an example. We want to automate Paint.Net, so first we fire up an instance of the Paint.Net process:

private string paintDotNetPath = @"C:\Program Files\Paint.NET\PaintDotNet.exe";

...

var processStartInfo = new ProcessStartInfo(paintDotNetPath);
var process = Process.Start(processStartInfo);

Having started it up, we wait for it to initialize (the Delay method simply calls Thread.Sleep with the appropriate timespan):

process.WaitForInputIdle();
Delay(4000);
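(As mentioned, Delay is just a thin wrapper over Thread.Sleep; something like this sketch, with the class name being my invention:)

```csharp
using System;
using System.Threading;

static class Timing
{
    // A sketch of the Delay helper: it simply wraps Thread.Sleep.
    public static void Delay(int milliseconds)
    {
        Thread.Sleep(TimeSpan.FromMilliseconds(milliseconds));
    }
}
```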

At this point, Paint.Net is up on screen, waiting for us to start doodling. This is where the UIAutomation bit begins. We need to get hold of Paint.Net’s main Window. Since we know the Process Id of Paint.Net, we’ll use a PropertyCondition bound to the ProcessId property:

var mainWindow = AutomationElement.RootElement.FindChildByProcessId(process.Id);

You won’t find the FindChildByProcessId method on the AutomationElement class: it’s an extension method I’ve created to wrap the call to FindFirst:

public static class AutomationExtensions
{
   public static AutomationElement FindChildByProcessId(this AutomationElement element, int processId)
   {
       var result = element.FindChildByCondition(
           new PropertyCondition(AutomationElement.ProcessIdProperty, processId));

       return result;
   }

   public static AutomationElement FindChildByCondition(this AutomationElement element, Condition condition)
   {
       var result = element.FindFirst(
           TreeScope.Children,
           condition);

       return result;
   }
}

Having found the main screen, we need to dig into it to find the actual drawing canvas element. This is where we need UISpy (which comes as part of the Windows SDK). UISpy lays bare the automation tree of the desktop and the applications on it. You can use it to snoop on the properties of any AutomationElement on screen, and to make snooping a snap, it has a particularly helpful mode where you can Ctrl-Click an element on screen to locate the corresponding AutomationElement in the automation tree (click the mouse icon on the UISpy toolbar to activate this mode). Using these special powers it doesn’t take long to discover that the drawing canvas is an element with its AutomationId property set to “surfaceBox”, and is a child of another element, with AutomationId set to “panel”, which in turn is a child of another element with [snip - I’ll spare you the details], which is a child of the Paint.Net main window.

Spying on Paint.Net with UISpy

To assist in navigating this kind of hierarchy (a task you have to do all the time when automating any non-trivial application), I’ve cooked up the FindDescendentByIdPath extension method (the implementation of which is a whole other blog post). With that, finding the drawing canvas element is as simple as:

// find the Paint.Net drawing Canvas
var canvas = mainWindow.FindDescendentByIdPath(new[] {
    "appWorkspace",
      "workspacePanel",
        "DocumentView",
          "panel",
            "surfaceBox" });

Animating the Mouse

Now for the fun part. Do you remember Spirographs? They are mathematical toys for drawing pretty geometrical pictures. But have you ever tried drawing one freehand? Well here’s your chance to convince your friends that you have artistic talents surpassing Michelangelo’s.

Jürgen Köller has very kindly written up the mathematical equations that produce these pretty pictures, and I’ve translated them into a C# iterator that produces a sequence of points along the spirograph curve (don’t worry too much about littleR, bigR, etc. – they’re the parameters that govern the shape of the spirograph):

private IEnumerable<Point> GetPointsForSpirograph(int centerX, int centerY, double littleR, double bigR, double a, int tStart, int tEnd)
{
   // Equations from http://www.mathematische-basteleien.de/spirographs.htm
   for (double t = tStart; t < tEnd; t+= 0.1)
   {
       var rDifference = bigR - littleR;
       var rRatio = littleR / bigR;
       var x = (rDifference * Math.Cos(rRatio * t) + a * Math.Cos((1 - rRatio) * t)) * 25;
       var y = (rDifference * Math.Sin(rRatio * t) - a * Math.Sin((1 - rRatio) * t)) * 25;

       yield return new Point(centerX + (int)x, centerY + (int)y);
   }
}

So where are we? We have the Paint.Net canvas open on screen, and we have a set of points that we want to render. Conveniently for us, the default tool in Paint.Net is the brush tool. So to sketch out the spirograph, we just need to automate the mouse to move over the canvas, press the left button, move from point to point, and release the left button. As far as I know there’s no functionality built into the UIAutomation API to automate the mouse, but the WPF TestAPI (free to download from CodePlex) compensates for that. In its static Mouse class it provides Up, Down, and MoveTo methods that do all we need.

private void DrawSpirographWaveOnCanvas(AutomationElement canvasElement)
{
   var bounds = canvasElement.Current.BoundingRectangle;

    var centerX = (int)(bounds.X + bounds.Width / 2);
    var centerY = (int)(bounds.Y + bounds.Height / 2);

   var points = GetPointsForSpirograph(centerX, centerY, 1.02, 5, 2, 0, 300);

   Mouse.MoveTo(points.First());
   Mouse.Down(MouseButton.Left);

   AnimateMouseThroughPoints(points);

   Mouse.Up(MouseButton.Left);
}

private void AnimateMouseThroughPoints(IEnumerable<Point> points)
{
   foreach (var point in points)
   {
       Mouse.MoveTo(point);
       Delay(5);
   }
}

Clicking Buttons

Once sufficient time has elapsed for your colleagues to admire the drawing, the last thing our automation script needs to do is tidy away – close down Paint.Net, in other words. This allows me to demonstrate another aspect of UIAutomation – how to manipulate elements on screen other than by simulating mouse moves and clicks.

Shutting down Paint.Net when there is an unsaved document requires two steps: clicking the Close button, and then clicking “Don’t Save” in the confirmation dialog box. As before, we use UISpy to discover the Automation Id of the Close button and its parents so that we can get a reference to the appropriate AutomationElement:

var closeButton = mainWindow.FindDescendentByIdPath(new[] {"TitleBar", "Close"});

Now that we have the button, we can get hold of its Invoke pattern. Depending on what kind of widget it represents, every AutomationElement makes available certain Patterns. These Patterns cover the kinds of interaction that are possible with that widget. So, for example, buttons (and button-like things such as hyperlinks) support the Invoke pattern, with a method for invoking the action; list items support the SelectionItem pattern, with methods for selecting the item or adding it to the selection; and text boxes support the Text pattern, with methods for selecting a range of text and querying its attributes. On MSDN, you’ll find a full list of the available patterns.

To invoke the methods of a pattern on a particular AutomationElement, you need to get hold of a reference to the pattern implementation on the element. First you find the appropriate pattern meta-data. For the Invoke pattern, for example, this would be InvokePattern.Pattern; other patterns follow the same convention. Then you pass that meta-data to the GetCurrentPattern method on the AutomationElement class. When you’ve got a reference to the pattern implementation, you can go ahead and invoke the relevant methods.
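
In raw form, without any helpers, that looks like this (using the closeButton element found above):

```csharp
// GetCurrentPattern returns object, so a cast to the concrete
// pattern type is needed before calling the pattern's methods.
var invokePattern = (InvokePattern)closeButton.GetCurrentPattern(InvokePattern.Pattern);
invokePattern.Invoke();
```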

Once again, I’ve made all this a bit easier by creating some extension methods (only the InvokePattern is shown here; extension methods for other patterns are available in the sample code):

public static class PatternExtensions
{
   public static InvokePattern GetInvokePattern(this AutomationElement element)
   {
       return element.GetPattern<InvokePattern>(InvokePattern.Pattern);
   }

   public static T GetPattern<T>(this AutomationElement element, AutomationPattern pattern) where T : class
   {
       var patternObject = element.GetCurrentPattern(pattern);

       return patternObject as T;
   }
}

With that I can now click the close button:

closeButton.GetInvokePattern().Invoke();

Then, after a short delay to allow the confirmation dialog to show up, I can click the Don’t Save button:

// give chance for the close dialog to show
Delay();

var dontSaveButton = mainWindow.FindDescendentByNamePath(new[] {"Unsaved Changes", "Don't Save"});

Mouse.MoveTo(dontSaveButton.GetClickablePoint().ToDrawingPoint());
Mouse.Click(MouseButton.Left);

For variation, I click this button by actually moving the mouse to its centre (the Mouse.MoveTo call) and then performing the click.
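
The ToDrawingPoint call is needed because GetClickablePoint returns a System.Windows.Point, whereas the TestAPI’s Mouse class works with System.Drawing.Point. The extension is just a coordinate-type conversion, along these lines (a sketch; the sample code may handle rounding differently):

```csharp
public static class PointExtensions
{
    // System.Windows.Point stores doubles; System.Drawing.Point wants ints.
    public static System.Drawing.Point ToDrawingPoint(this System.Windows.Point point)
    {
        return new System.Drawing.Point((int)point.X, (int)point.Y);
    }
}
```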

All the code is available on GitHub.

Bowing out

When I first read about UI Automation, I got the impression that it was rather complicated, with lots of code needed to make it do anything useful. I tried using Project White (a ThoughtWorks-sponsored wrapper around UIAutomation), thinking that it would save me from the devilish details. It turned out that Project White introduced complexities of its own, and that, actually, UI Automation is pretty straightforward to use, especially when oiled with my extension methods. I’ve had a lot of fun using it to create automated tests of our product over the last couple of months.

Update

16/7/2014: Following the demise of code.msdn.microsoft.com, I’ve moved the code to GitHub. I’ve also updated it so that it works with Paint.Net 4.0.