
Wednesday, 17 September 2014

Text templating with … (gasp) … Excel!

Excel has many unlikely uses (did you know that Excel 97 contained a hidden Flight Simulator?). I’ve just reminded myself of another one: fast text templating.

I needed to convert a data structure that looked like this

[image: the pipe-delimited text string]

into one that looked like this

[image: the XML Keyword elements]

In other words, I needed to convert from a delimited text string into XML elements.

It took me about 60 seconds in Excel. Here’s how:

  1. I copied the text containing all the keywords and pasted it into a single cell of an Excel spreadsheet.
  2. I selected the cell, and clicked on the Text to Columns tool (you’ll find it on the Data tab). [image]
  3. In the Text to Columns Wizard, I indicated that the data was Delimited, and then entered the appropriate delimiter – the pipe (‘|’) character in this case. When I clicked Finish, Excel split each keyword into its own cell, going across the sheet. [image]
  4. Next, I needed the keywords arranged vertically instead of horizontally – in rows instead of columns. So I selected all the columns by clicking the first cell, then pressing Ctrl+Shift+[Right Arrow] to select all the way to the last keyword. Ctrl+C copied all the keywords. After clicking in an empty cell, I selected the Paste Special option. In the Paste Special dialog, right at the bottom, you’ll find the Transpose option, which converts columns of data into rows, and vice versa. [image]
  5. In the cell next to the first keyword, I typed a formula to wrap the keyword in the XML boilerplate. To concatenate strings use ‘&’, and to include a literal quote in a string, double it up. So the formula I needed was
    ="<Keyword Name=""" & A3 & """ />"
    [image]
  6. I selected that first cell again, and double-clicked the solid square at the bottom right of the cell’s selection border. This replicated the formula all the way down the page, to the last row with something in it.
  7. Did the Ctrl+C/Ctrl+V dance to copy and paste the generated XML into my code file.
  8. Job’s a good ‘un!

Friday, 28 September 2012

A quick guide to Registration-Free COM in .Net–and how to Unit Test it

A couple of times recently I’ve needed to set up a .Net application to use Registration-Free COM, and each time I’ve had to hunt around to recall the details. Further, just this week I needed to write some unit tests that involve instantiating these un-registered COM objects, and that wasn’t straightforward. So, as much for the benefit of my future self as for you, my loyal reader, I’m going to summarise my know-how in a quick blog post before it becomes used-to-know-how.

What is Registration-Free COM?

If you’re still reading, I’ll assume you know all about COM, Microsoft’s ancient technology for enabling components written in different languages to talk to each other (I wrote a little about it here, with some links to introductory articles). You are probably also aware of DLL Hell. That isn’t a place where bad executables are sent when they are terminated. Rather, it was a pain inflicted on developers by the necessity of registering COM components (and other DLLs) in a central place in the OS. Since all components were dumped into the same pool, one application could cause all kinds of hell for others by registering different versions of shared DLLs. The OS didn’t police this pool, and it certainly didn’t enforce compatibility, so much unexpected weird and wonderful behaviour resulted.

Starting with Windows XP, it has been possible to more-or-less escape this hell by not registering components in a central location, and instead using Registration-Free COM. This makes it much easier to deploy applications, because you can just copy a bunch of files – RegSvr32 is not involved, and there are no Registry keys to be written. You can be confident that your application will have no impact on others once installed.

It is all done using manifests.

Individual Manifest Files

For each dll or ocx file (or ax file in my case – I’m working with DirectShow filters) containing COM components, you need to create a manifest.

Suppose your dll is called MyCOMComponent.dll. Your manifest file should be called MyCOMComponent.sxs.manifest, and it should contain the following:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">

<assemblyIdentity
    type="win32"
    name="MyCOMComponent.sxs"
    version="1.0.0.0" />

<file name="MyCOMComponent.dll">
    <comClass
        description="MyCOMComponent"
        clsid="{AB12C3D4-567D-4156-802B-40A1387ADE61}"
        threadingModel="Both" />
</file>
</assembly>

Obviously you need to make sure that the clsid inside comClass is correct for your component. If you have more than one COM object in your dll you can add multiple comClass elements. For those not wanting to generate these manifests by hand, a StackOverflow answer lists some tools that might help.
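For instance, a manifest describing a dll that exposes two classes might look like this (the first clsid is the one from the example above; the second class name and clsid are placeholders – substitute your component’s actual values):

```xml
<file name="MyCOMComponent.dll">
    <comClass
        description="MyCOMComponent"
        clsid="{AB12C3D4-567D-4156-802B-40A1387ADE61}"
        threadingModel="Both" />
    <comClass
        description="MyOtherComponent"
        clsid="{00000000-0000-0000-0000-000000000000}"
        threadingModel="Both" />
</file>
```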

About Deployment

When you deploy your application you should deploy both the dll/ocx/ax file and its manifest into the same directory as your .Net exe/dlls. When developing in Visual Studio, I customise the build process to make sure all these dlls get copied into the correct place for running and debugging the application. I stole the technique for doing this from the way ASP.Net MVC applications manage their dlls.

Put all the dlls and manifests into a folder called _bin_deployableAssemblies alongside the rest of your source code. Then modify your csproj file and add the following Target at the end of it:

<!--
  ============================================================
  CopyBinDeployableAssemblies

  This target copies the contents of ProjectDir\_bin_deployableAssemblies to the bin
  folder, preserving the relative paths
  ============================================================
  -->
<Target Name="CopyBinDeployableAssemblies" Condition="Exists('$(MSBuildProjectDirectory)\_bin_deployableAssemblies')">
  <CreateItem Include="$(MSBuildProjectDirectory)\_bin_deployableAssemblies\**\*.*" Condition="Exists('$(MSBuildProjectDirectory)\_bin_deployableAssemblies')">
    <Output ItemName="_binDeployableAssemblies" TaskParameter="Include" />
  </CreateItem>
  <Copy SourceFiles="@(_binDeployableAssemblies)" DestinationFolder="$(OutDir)\%(RecursiveDir)" SkipUnchangedFiles="true" Retries="$(CopyRetryCount)" RetryDelayMilliseconds="$(CopyRetryDelayMilliseconds)" />
</Target>

To make sure that target is called when you build, update the AfterBuild target (uncomment it first if you’re not currently using it):

 <Target Name="AfterBuild" DependsOnTargets="MyOtherTarget;CopyBinDeployableAssemblies" />

The Application Manifest

Now you need to make sure your application declares its dependencies.

First add an app.manifest file to your project, if you haven’t already got one. To do this in Visual Studio, right-click the project, select Add –> New Item…, and then choose Application Manifest File. Having added the manifest, you need to ensure it is compiled into your executable. You do this by right-clicking the project, choosing Properties, then going to the Application tab. In the Resources section you’ll see a Manifest textbox: make sure your app.manifest file is selected.

[image: the Application tab of project Properties, showing the Manifest textbox]

Now you need to add a section to the app.manifest file for each dependency.

By default your app.manifest file will probably already have a dependency for the Windows Common Controls. After that (so, nested directly inside the root element) you should add the following for each of the manifest files you created earlier:

<dependency>
  <dependentAssembly>
    <assemblyIdentity
        type="win32"
        name="MyCOMComponent.sxs"
        version="1.0.0.0" />
  </dependentAssembly>
</dependency>

Notice that we drop the “.manifest” off the end of the manifest file name when we refer to it here. The other important thing is that the version number here and the one in the manifest file should exactly match, though I don’t think there’s any reason to change it from 1.0.0.0.

Disabling the Visual Studio Hosting Process

There’s just one more thing to do before you try running your application, and that is to turn off the Visual Studio hosting process. The hosting process apparently helps improve debugging performance, amongst other things (though I’ve not noticed greatly decreased performance with it disabled). The problem is that, when enabled, application executables are not loaded directly – rather, they are loaded by an intermediary executable with a name ending .vshost.exe. The upshot is that the manifest embedded in your exe is ignored, and COM components are not loaded.

Disabling the hosting process is simple: go to the Debug tab of your project’s Properties and uncheck “Enable the Visual Studio hosting process”.

[image: the Debug tab, with “Enable the Visual Studio hosting process” unchecked]

With everything set up, you’ll want to try running your application. If you got everything right first time, all will go smoothly. If not, you might see an error like this:

[image: the error dialog]

If you do, check Windows’ Application event log for errors coming from SideBySide. These are usually pretty helpful in telling you which part of your configuration has a problem.

Summary

To re-cap briefly, here are the steps to enabling Registration-Free COM for your application:

  1. Create a manifest file for each COM dll
  2. Make sure both COM dlls and manifest files are deployed alongside your main executable
  3. Add a manifest file to your executable which references each individual manifest file
  4. Make sure you turn off the Visual Studio hosting process before debugging

Unit Testing and Registration-Free COM

And now, as promised, a word about running Unit Tests when Registration-Free COM is involved.

If you have a Unit Test which tries to create a Registration-Free COM object you’ll probably get an exception like

Retrieving the COM class factory for component with CLSID {1C123B56-3774-4EE4-A482-512B3AB7CABB} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

If you don’t get this error, it’s probably because the component is still registered centrally on your machine. Running regsvr32 /u [Path_to_your_dll] will unregister it.

Why do Unit Tests fail, when the application works? It is for the same reason that the Visual Studio hosting process breaks Registration-Free COM: your unit tests are actually being run in a different process (for example, the Resharper.TaskRunner), and the manifest file which you so carefully crafted for your exe is being ignored. Only the manifest on the entry executable is taken into account, and since that’s a generic unit test runner it says nothing about your COM dependencies.

But there’s a workaround. Win32 has some APIs – the Activation Context APIs – which allow you to manually load up a manifest for each thread which needs to create COM components. Spike McLarty has written some code to make these easy to use from .Net, and I’ll show you a technique to incorporate this into your code so that it works correctly whether called from unit tests or not.

Here’s Spike’s code, with a few minor modifications of my own:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Security;

/// <remarks>
/// Code from http://www.atalasoft.com/blogs/spikemclarty/february-2012/dynamically-testing-an-activex-control-from-c-and
/// </remarks>
class ActivationContext
{
    static public void UsingManifestDo(string manifest, Action action)
    {
        UnsafeNativeMethods.ACTCTX context = new UnsafeNativeMethods.ACTCTX();
        context.cbSize = Marshal.SizeOf(typeof(UnsafeNativeMethods.ACTCTX));
        // ACTCTX is 0x20 bytes in a 32-bit process; this check (and the struct
        // layout below) assumes you are running as 32-bit
        if (context.cbSize != 0x20)
        {
            throw new Exception("ACTCTX.cbSize is wrong");
        }
        context.lpSource = manifest;

        IntPtr hActCtx = UnsafeNativeMethods.CreateActCtx(ref context);
        if (hActCtx == (IntPtr)(-1))
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
        try // with valid hActCtx
        {
            IntPtr cookie = IntPtr.Zero;
            if (!UnsafeNativeMethods.ActivateActCtx(hActCtx, out cookie))
            {
                throw new Win32Exception(Marshal.GetLastWin32Error());
            }
            try // with activated context
            {
                action();
            }
            finally
            {
                UnsafeNativeMethods.DeactivateActCtx(0, cookie);
            }
        }
        finally
        {
            UnsafeNativeMethods.ReleaseActCtx(hActCtx);
        }
    }

    [SuppressUnmanagedCodeSecurity]
    internal static class UnsafeNativeMethods
    {
        // Activation Context API Functions
        [DllImport("Kernel32.dll", SetLastError = true, EntryPoint = "CreateActCtxW")]
        internal extern static IntPtr CreateActCtx(ref ACTCTX actctx);

        [DllImport("Kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        internal static extern bool ActivateActCtx(IntPtr hActCtx, out IntPtr lpCookie);

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        internal static extern bool DeactivateActCtx(int dwFlags, IntPtr lpCookie);

        [DllImport("Kernel32.dll", SetLastError = true)]
        internal static extern void ReleaseActCtx(IntPtr hActCtx);

        // Activation context structure
        [StructLayout(LayoutKind.Sequential, Pack = 4, CharSet = CharSet.Unicode)]
        internal struct ACTCTX
        {
            public Int32 cbSize;
            public UInt32 dwFlags;
            public string lpSource;
            public UInt16 wProcessorArchitecture;
            public UInt16 wLangId;
            public string lpAssemblyDirectory;
            public string lpResourceName;
            public string lpApplicationName;
            public IntPtr hModule;
        }

    }
}

The method UsingManifestDo allows you to run any code of your choosing with an Activation Context loaded from a manifest file. Clearly we only need to invoke this when our code is being called from a Unit Test. But how do we structure code elegantly so that it uses the activation context when necessary, but not otherwise? Here’s my solution:

public static class COMFactory
{
   private static Func<Func<object>, object> _creationWrapper = function => function();

   public static T CreateComObject<T>() where T:new()
   {
       var instance = (T)_creationWrapper(() => new T());
       return instance;
   }

   public static object CreateComObject(Guid guid)
   {
       Type type = Type.GetTypeFromCLSID(guid);
       var instance = _creationWrapper(() => Activator.CreateInstance(type));

       return instance;
   }

   public static void UseManifestForCreation(string manifest)
   {
       _creationWrapper = function =>
           {
               object result = null;
               ActivationContext.UsingManifestDo(manifest, () => result = function());
               return result;
           };
   }
}

Whenever I need to create a COM object in my production code, I do it by calling COMFactory.CreateComObject. By default this will create the COM objects directly, relying on the manifest which is embedded in the executable.

But in my Test project, before running any tests I call COMFactory.UseManifestForCreation and pass in the path to the manifest file. This ensures that the manifest gets loaded up before we try to create any COM objects in the tests.
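In my Test project that call goes in a once-per-run setup hook. Here’s a minimal sketch (NUnit-style attributes are assumed – use your own framework’s equivalent; the class name and manifest path are illustrative):

```csharp
using System;
using System.IO;
using NUnit.Framework;

[SetUpFixture]
public class ComActivationSetup
{
    [OneTimeSetUp]
    public void LoadManifest()
    {
        // app.manifest is linked into the test project and copied to its
        // output directory; adjust the path if yours lives elsewhere
        var manifestPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app.manifest");
        COMFactory.UseManifestForCreation(manifestPath);
    }
}
```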

To avoid duplicating the manifest file, I share the same file between my Test project and main executable project. You can do this by right-clicking your test project, choosing Add->Existing Item…, then selecting the app.manifest in your main project. Finally, click the down arrow on the Add split button, and choose Add as Link.
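In the test project’s csproj, the linked file ends up looking something like this (the relative path is an assumption about your solution layout; the CopyToOutputDirectory element is only needed if your tests load the manifest from their output folder):

```xml
<ItemGroup>
  <!-- app.manifest shared from the main project; adjust the path to suit -->
  <None Include="..\MyMainProject\app.manifest">
    <Link>app.manifest</Link>
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```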

If you’ve got any tips to share on using Registration-Free COM, whether in Unit Tests or just in applications, please do leave a comment.

Monday, 7 February 2011

How to deploy to, and debug, multiple instances of the Windows Phone 7 emulator

I’m developing a multi-player Windows Phone 7 game. Now I don’t know about you, but I find it hard to test a multi-player application when I’m only allowed to run one instance of it. And that seemed to be the case with Windows Phone 7 applications. Microsoft provide an Emulator, but it’s a single-instance application: however many times you click its icon, you only get the one window.

Googling found me a useful article on how to run multiple instances of the emulator. But it didn’t tell me how to deploy applications to them, or how to debug those applications. There was, however, a post in the forums, somewhat reminiscent of Monsieur de Fermat’s scribblings, that gave me hope that what I wanted to do was indeed possible.

So I set out on a journey of discovery.

About an hour later, I had this [image], this [image], and this [image: multiple emulator instances].

Step by Step instructions

Disclaimer: what I am about to show is completely unsupported by Microsoft or me. Continue at your own risk. Here be dragons.

  1. Open the folder [Your Drive Letter]:\ProgramData\Microsoft\Phone Tools\CoreCon\10.0\addons
  2. Locate the file ImageConfig.en-US.xsl
  3. Take a copy of it, leaving it in the same directory, and name it something like ImageConfig.en-US 2nd Instance.xsl
  4. Open the copy in your text editor of choice.
  5. Locate the DEVICE element. [image]
  6. Change the Name attribute, and assign a new value to ID – you can use the Online Guid Generator if you can’t think of one off the top of your head.
  7. Scroll down the file to locate the PROPERTY element with ID=”VMID”. [image]
  8. Put a new Guid inside that element – make sure though that you use capital letters rather than lower case.
  9. Save the file
  10. That’s it. Re-open the XAP deployment tool, or Visual Studio, if you already have them open, and you’ll see your new Emulator instances.
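From memory, the edited parts of the copied file end up looking something like this – treat the layout as a sketch of the CoreCon datastore format rather than gospel, and both GUIDs as placeholders for ones you generate yourself:

```xml
<DEVICE ID="{11111111-2222-3333-4444-555555555555}"
        Name="Windows Phone 7 Emulator (2nd instance)">
  <!-- ...other elements left exactly as in the original file... -->
  <!-- note: the VMID value must use upper-case hex digits -->
  <PROPERTY ID="VMID">{AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE}</PROPERTY>
</DEVICE>
```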

A Bonus Visual Studio tip

To debug multiple instances of your Windows Phone 7 application you can do the following:

  1. Start the first instance as usual.
  2. Change the Deployment Device to your newly-minted 2nd Emulator. [image]
  3. To start the 2nd instance, right-click on your project, go to the Debug menu item, then select Start new instance. [image]
  4. Prepare a wet towel and a darkened room for the multi-player debugging experience.

Tuesday, 2 November 2010

Using Ready-made Virtual PCs to try out the Visual Studio Async CTP

The Visual Studio Async CTP containing the preview of C# 5.0 – which was all the talk at Microsoft PDC 2010 – installs on top of Visual Studio 2010. If, like me, you’re wary of trashing your main development machine, you might like to try out the ready-made Visual Studio Virtual Machines that Microsoft provide. They are a pretty hefty download (over 9 GB), but I set my download manager to work on the task just after sitting down at my desk this morning, and it was done by mid-afternoon.

Having a Windows Server 2008 Virtual Machine ready built with Visual Studio 2010 (and 2008), Team Foundation Server 2010, SQL Server 2008 and Microsoft Office is a huge time-saver. Though they are all time-limited versions, you can apparently use your MSDN product keys to activate them into perpetual mode.

They come in three flavours: one for Hyper-V, one for Virtual PC 2007, and one for Windows 7 Virtual PC. If you hop over to Brian Keller’s blog he has some lists of URLs that you can paste into your download manager to make the download job pretty painless, and then some instructions to get you started. One thing to note, once you’ve booted the machine, is that all the user accounts have the password “P2ssw0rd”.

Having booted into the Virtual PC, you will need to install the Silverlight 4 SDK and then, of course, the Visual Studio Async CTP itself. Both are pretty small downloads and quite quick to install.

Have fun!

Wednesday, 20 October 2010

Quick Fix: Silverlight Application showing as a blank screen

I just diagnosed and fixed a problem with my Silverlight application that I’ve hit often enough to make me think it’s worth sharing my approach.

The problem was a simple one. When I deployed my application, and browsed to a page that should have displayed a Silverlight control, all I saw was a blank page.

So I fired up the diagnoser of all problems HTTP, the excellent Fiddler, and reloaded my page. This is what I saw:

[image: Fiddler showing the failed request]

In glaring red, Fiddler shows that my Silverlight control made a request to download one of its dependencies (which are all packaged in zip files and stored in the ClientBin folder), and was told it was not available – an HTTP 404 error.

So the fix was obvious: just add the missing file to my installer. Sorted!

Thursday, 20 May 2010

A Primer on exposing .Net to COM

Four days ago, I jumped in a time machine, dialled back two millennia, and emerged in the medieval period of programming history, sometime during the reign of King COM. Now, returning, I present to you my rough notes on how you, a citizen of the brave new .Net utopia can communicate with the denizens of the dark ages.

Brace yourself

You’ll want to start by understanding the basic principles on which COM is built. There is a series of articles on CodeProject which form a good introduction. Here are links to Part 1 and Part 2. A post on StackOverflow provides an index to the remainder. Reading those articles, I am, once again, amazed at the ingenuity of the ancients!

Consuming COM in .Net

Consuming COM in .Net – well, “it’s trivial” (as my old Maths lecturer used to say, having written a partial differential equation on the blackboard, looking to us expectantly for a solution). And in .Net 4.0 it has just got trivialler, thanks to dynamic types and No PIA. You just right-click on your project, click Add Reference…, then select the appropriate library in the COM tab. All the types will then be available to you as if they were regular .Net types.

Exporting .Net to COM

Where it gets interesting is when you want to make a .Net type available to COM. Here’s what I’ve learnt.

  • Create yourself a new Class library project. In the AssemblyInfo file, make sure that the ComVisible attribute is given the value false – you can then be selective in what is visible. Also, make sure that a Guid has been created.
  • Start by defining an interface defining the methods that you want your object to expose to COM. Decorate this interface with the attribute ComVisible(true).
  • Implement the interface in your object, and decorate the class with [ComVisible(true)] and [ClassInterface(ClassInterfaceType.None)]
  • Open the project properties, go to the Build tab, and tick Register for COM interop. This will ensure that the necessary Type Library (.tlb) file gets created in the /bin directory
  • To check how your type will appear to COM, you can browse the tlb file in the Visual Studio Object Browser – but note that this will lock the tlb file, so you won’t be able to rebuild your project whilst it remains open in the browser.
  • I’ve noticed in VS2010 that the tlb file sometimes doesn’t seem to get updated: a Clean and a Rebuild soon fixed that.

An example:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

namespace ComLibrary
{
    [ComVisible(true)]
    public interface IMainType
    {
        int GetInt();

        void StartTiming();

        int StopTiming();
    }

    [ComVisible(true)]
    [ClassInterface(ClassInterfaceType.None)]
    public class MainType : IMainType
    {
        private Stopwatch _stopWatch;

        public int GetInt()
        {
            return 42;
        }

        public void StartTiming()
        {
            _stopWatch = new Stopwatch();
            _stopWatch.Start();
        }

        public int StopTiming()
        {
            return (int)_stopWatch.ElapsedMilliseconds;
        }
    }
}

Using the exported COM object

I last touched C++ about 15 years ago. I’ve never programmed with COM in C++. With that valuation of my advice, here is how I went about calling my COM object from C++.

  • Remember to call CoInitialize(NULL) on each thread from which you call COM objects. Before the thread exits you should also call CoUninitialize()
  • Copy the tlb from the /bin directory of your .Net project into your C++ project. At the top of the file where you want to call your COM object put #import "<name of your tlb file>.tlb".
  • Define a smart pointer to your type: this takes care of all the AddRef/Release stuff which you should have read about in the introductory article I pointed you to: for example
    ComLibrary::IMainTypePtr myType;
  • Create an instance of your object:
    myType.CreateInstance(__uuidof(ComLibrary::MainType));
  • Use it:
    myType->GetInt();
  • During development, you might find that, having updated your tlb file, new or changed members don’t appear on the C++ side. There are two things to try here:
    1. In the Debug folder of your C++ project look for the two files <your typelib name>.tlh and <your typelib name>.tli and delete them. Then rebuild your project.
    2. If the project compiles but you get intellisense errors, try closing down the solution and deleting the intellisense cache files. These are located next to the solution file, and have extension sdf (for VS 2010) or ncb (for VS 2008 and earlier)

Here’s a fuller sample for the C++ side

#include "stdafx.h"
#include <iostream>
#import "ComLibrary.tlb"


int _tmain(int argc, _TCHAR* argv[])
{
    CoInitialize(NULL);

    ComLibrary::IMainTypePtr myType;
    myType.CreateInstance(__uuidof(ComLibrary::MainType));

    
    myType->StartTiming();

    for (long long i=0; i < 1000000; i++)
    {
        myType->GetInt();
    }

    long timeInMilliseconds = myType->StopTiming();

    printf("%d", timeInMilliseconds);
    std::cin.get();

    CoUninitialize();
    return 0;
}

Other notes

  • If you try using your exported .Net type in VBA or VB6 and you get the error “Function or interface marked as restricted, or the function uses an Automation type not supported in Visual Basic”, then check the parameter and return types on the interfaces you are exposing. I got this error when trying to return a .Net long – a 64-bit integer. VBA can’t count that far: its longs are only 32-bit.
  • Primitive types appear to translate fairly straightforwardly between .Net and COM. Things start to get hairier when arrays become involved. On the COM side these become SAFEARRAYs, and look like being a right pain to deal with, somewhat mitigated by the CComSafeArray wrapper class.

Wednesday, 23 December 2009

Specifying Resource Keys using Data Binding in WPF

Imagine you’re wanting to show a list of things in an ItemsControl, with each item having a different image. Using WPF’s implicit Data Templating support, and giving each item Type its own Data Template is one way of implementing this: but if there are many items, and the image is the only thing that’s different in each case, and the Data Template is of any complexity, your code will soon start to suffer DRY rot.

You could just pinch your nose and put the image in your ViewModel so that it can be databound in the normal way. Of course, images should really live in a ResourceDictionary: but how can you pick a resource out of a ResourceDictionary using data binding? Let me show you.

The example: AutoTherapist

Here’s what I want to build: [image: the AutoTherapist window]

I’ve got a very simple ViewModel with a property exposing a list of the commands that sit behind the buttons in my Window:

public class WindowViewModel
{
    public IList<ICommand> Commands
    {
        get
        {
            return new ICommand[] { new AngryCommand(), new HappyCommand(), new CoolCommand() };
        }
    }
}

And here’s the relevant part of the View:

<ItemsControl Grid.Row="1" ItemsSource="{Binding Commands}">
    <ItemsControl.ItemTemplate>
      <DataTemplate>
        <Button Command="{Binding}" Padding="2" Margin="2" Width="100" Height="100">
          <StackPanel>
            <Image HorizontalAlignment="Center"
                   Width="60"
                   app:ResourceKeyBindings.SourceResourceKeyBinding="{Binding Converter={StaticResource ResourceKeyConverter}, ConverterParameter=Image.{0}}"/>
            <TextBlock Text="{Binding Name}" HorizontalAlignment="Center" FontWeight="Bold" Margin="0,2,0,0"/>
          </StackPanel>
        </Button>
      </DataTemplate>
    </ItemsControl.ItemTemplate>
    <ItemsControl.ItemsPanel>
      <ItemsPanelTemplate>
        <StackPanel Orientation="Horizontal"/>
      </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
</ItemsControl>

The ItemsControl is bound to the Commands property on my ViewModel. Each command is rendered with the same DataTemplate. The magic happens on the Image element, where I’m using the attached property ResourceKeyBindings.SourceResourceKeyBinding. This property allows me to data-bind the key of the resource I want to use for the Image.Source property. I’ll show you how that works in a minute, but first: where are the resource keys coming from?

You’ll notice that, since there’s no Path specified in the Binding, we’re binding directly to the data object - in this case, one of the commands. Then we’re using a converter to turn that object into the appropriate key. Here’s the code for the converter:

class TypeToResourceKeyConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        var formatString = parameter as string;
        var type = value.GetType();
        var typeName = type.Name;

        var result = string.Format(formatString, typeName);

        return result;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

What this is doing is getting the name of the Type of the data object and pushing that through the format string given as the parameter to the converter. Given the way the converter is set up in our case, this will produce resource keys like “Image.AngryCommand”, “Image.HappyCommand”, etc.

So now all we need to make the AutoTherapist work is to define those resources. Here’s App.xaml:

<Application x:Class="ResourceKeyBindingSample.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    StartupUri="Window1.xaml">
    <Application.Resources>
      <BitmapImage x:Key="Image.AngryCommand" UriSource="Angry.png"/>
      <BitmapImage x:Key="Image.CoolCommand" UriSource="Cool.png"/>
      <BitmapImage x:Key="Image.HappyCommand" UriSource="Happy.png"/>
    </Application.Resources>
</Application>

(The icons are from VistaIcons.com, by the way).

The implementation

So what does that attached property look like? It’s actually rather simple:

public static class ResourceKeyBindings
{
    public static DependencyProperty SourceResourceKeyBindingProperty = ResourceKeyBindingPropertyFactory.CreateResourceKeyBindingProperty(
        Image.SourceProperty,
        typeof(ResourceKeyBindings));

    public static void SetSourceResourceKeyBinding(DependencyObject dp, object resourceKey)
    {
        dp.SetValue(SourceResourceKeyBindingProperty, resourceKey);
    }

    public static object GetSourceResourceKeyBinding(DependencyObject dp)
    {
        return dp.GetValue(SourceResourceKeyBindingProperty);
    }
}

As you can see, I’ve factored out the magic into ResourceKeyBindingPropertyFactory. This makes it easy to create equivalent properties for any other target property (in the download, for example, I’ve made a StyleResourceKeyBinding property for binding to FrameworkElement.Style). ResourceKeyBindingPropertyFactory looks like this:

public static class ResourceKeyBindingPropertyFactory
{
     public static DependencyProperty CreateResourceKeyBindingProperty(DependencyProperty boundProperty, Type ownerClass)
     {
         var property = DependencyProperty.RegisterAttached(
             boundProperty.Name + "ResourceKeyBinding",
             typeof(object),
             ownerClass,
             new PropertyMetadata(null, (dp, e) =>
             {
                 var element = dp as FrameworkElement;
                 if (element == null)
                 {
                     return;
                 }

                 element.SetResourceReference(boundProperty, e.NewValue);
             }));

         return property;
     }
}

All we do here is register an attached property and set it up with a PropertyChanged handler: the handler simply takes the new value of the property – which in our case will be the resource key – and passes it to SetResourceReference along with the target property. SetResourceReference is the programmatic equivalent of using DynamicResource in XAML – it looks up the appropriate resource (from the current element’s ResourceDictionary or one of its ancestors’) and assigns it to the given property.

So there you have it: data binding for Resource Keys. Full source code for this sample is available from the MSDN Code Gallery.

Not for Silverlight

Unfortunately I don’t think it is possible to port this to Silverlight without a lot of work, because Silverlight has no support for Dynamic Resources. From a cursory look, I don’t think there is even runtime support built in for finding a Resource in the Resource Dictionary chain up the ancestor tree of an element as there is in WPF: it looks like the Silverlight parser is responsible for doing this when required by StaticResource references. I would be delighted if someone could show me otherwise.

Tuesday, 4 August 2009

An introduction to UI Automation – with spooky spirographs

A few weeks ago, I unearthed a hidden gem in the .Net framework: the UIAutomation API. UI Automation made its entrance as part of .Net 3.0, but was overshadowed by the trio of W*Fs – if they’d named it Windows Automation Foundation it might have received more love! UIAutomation provides a robust way of poking, prodding and perusing any widget shown on the Windows desktop; it even works with Silverlight. It can be used for many things, like building Screen Readers, writing automated UI tests – or for creating a digital spirit to spook your colleagues by possessing Paint.Net and sketching spirographs.

Spirograph in Paint.Net

The UI Automation framework reduces the entire contents of the Windows Desktop to a tree of AutomationElement objects. Every widget of any significance, from Windows, to Menu Items, to Text Boxes is represented in the tree. The root of this tree is the Desktop window. You get hold of it, logically enough, using the static AutomationElement.RootElement property. From there you can traverse your way down the tree to just about any element on screen using two suggestively named methods on AutomationElement, FindAll and FindFirst.

Each of these two methods takes a Condition instance as a parameter, and it uses this to pick out the elements you’re looking for. The most useful kind of condition is a PropertyCondition. AutomationElements each have a number of properties like Name, Class, AutomationId, ProcessId, etc, exposing their intimate details to the world; these are what you use in the PropertyCondition to distinguish an element from its siblings when you’re hunting for it using one of the Find methods.

Finding Elements to automate

Let me show you an example. We want to automate Paint.Net, so first we fire up an instance of the Paint.Net process:

private string paintDotNetPath = @"C:\Program Files\Paint.NET\PaintDotNet.exe";

...

var processStartInfo = new ProcessStartInfo(paintDotNetPath);
var process = Process.Start(processStartInfo);

Having started it up, we wait for it to initialize (the Delay method simply calls Thread.Sleep with the appropriate timespan):

process.WaitForInputIdle();
Delay(4000);
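For completeness, a Delay helper along those lines might look like this (the default interval in the parameterless overload is my own choice, not from the original code):

```csharp
private static void Delay()
{
    Delay(1000); // default pause; the actual value used in the sample is an assumption
}

private static void Delay(int milliseconds)
{
    System.Threading.Thread.Sleep(milliseconds);
}
```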

At this point, Paint.Net is up on screen, waiting for us to start doodling. This is where the UIAutomation bit begins. We need to get hold of Paint.Net’s main Window. Since we know the Process Id of Paint.Net, we’ll use a PropertyCondition bound to the ProcessId property:

var mainWindow = AutomationElement.RootElement.FindChildByProcessId(process.Id);

You won’t find the FindChildByProcessId method on the AutomationElement class: it’s an extension method I’ve created to wrap the call to FindFirst:

public static class AutomationExtensions
{
   public static AutomationElement FindChildByProcessId(this AutomationElement element, int processId)
   {
       var result = element.FindChildByCondition(
           new PropertyCondition(AutomationElement.ProcessIdProperty, processId));

       return result;
   }

   public static AutomationElement FindChildByCondition(this AutomationElement element, Condition condition)
   {
       var result = element.FindFirst(
           TreeScope.Children,
           condition);

       return result;
   }
}

Having found the main screen, we need to dig into it to find the actual drawing canvas element. This is where we need UISpy (which comes as part of the Windows SDK). UISpy lays bare the automation tree of the desktop and the applications on it. You can use it to snoop on the properties of any AutomationElement on screen, and to make snooping a snap, it has a particularly helpful mode where you can Ctrl-Click an element on screen to locate the corresponding AutomationElement in the automation tree (click the mouse icon on the UISpy toolbar to activate this mode). Using these special powers it doesn’t take long to discover that the drawing canvas is an element with its AutomationId property set to “surfaceBox”, and is a child of another element, with AutomationId set to “panel”, which in turn is a child of another element with [snip - I’ll spare you the details], which is a child of the Paint.Net main window.

Spying on Paint.Net with UISpy

To assist in navigating this kind of hierarchy (a task you have to do all the time when automating any non-trivial application), I’ve cooked up the FindDescendentByIdPath extension method (the implementation of which is a whole other blog post). With that, finding the drawing canvas element is as simple as:

// find the Paint.Net drawing Canvas
var canvas = mainWindow.FindDescendentByIdPath(new[] {
    "appWorkspace",
      "workspacePanel",
        "DocumentView",
          "panel",
            "surfaceBox" });
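The implementation of FindDescendentByIdPath deserves its own post, but its core is just a fold over the id path, using the FindChildByCondition helper shown earlier. A sketch (error reporting elided):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Windows.Automation;

public static class AutomationPathExtensions
{
    public static AutomationElement FindDescendentByIdPath(
        this AutomationElement element, IEnumerable<string> idPath)
    {
        // at each level, find the child whose AutomationId matches the next segment
        return idPath.Aggregate(
            element,
            (parent, automationId) => parent.FindChildByCondition(
                new PropertyCondition(AutomationElement.AutomationIdProperty, automationId)));
    }
}
```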

Animating the Mouse

Now for the fun part. Do you remember Spirographs? They are mathematical toys for drawing pretty geometrical pictures. But have you ever tried drawing one freehand? Well here’s your chance to convince your friends that you have artistic talents surpassing Michelangelo’s.

Jürgen Köller has very kindly written up the mathematical equations that produce these pretty pictures, and I’ve translated them into a C# iterator that produces a sequence of points along the spirograph curve (don’t worry too much about littleR, bigR, etc. – they’re the parameters that govern the shape of the spirograph):

private IEnumerable<Point> GetPointsForSpirograph(int centerX, int centerY, double littleR, double bigR, double a, int tStart, int tEnd)
{
   // Equations from http://www.mathematische-basteleien.de/spirographs.htm
   for (double t = tStart; t < tEnd; t += 0.1)
   {
       var rDifference = bigR - littleR;
       var rRatio = littleR / bigR;
       var x = (rDifference * Math.Cos(rRatio * t) + a * Math.Cos((1 - rRatio) * t)) * 25;
       var y = (rDifference * Math.Sin(rRatio * t) - a * Math.Sin((1 - rRatio) * t)) * 25;

       yield return new Point(centerX + (int)x, centerY + (int)y);
   }
}

So where are we? We have the Paint.Net canvas open on screen, and we have a set of points that we want to render. Conveniently for us, the default tool in Paint.Net is the brush tool. So to sketch out the spirograph, we just need to automate the mouse to move over the canvas, press the left button, move from point to point, and release the left button. As far as I know there’s no functionality built into the UIAutomation API to automate the mouse, but the WPF TestAPI (free to download from CodePlex) compensates for that. In its static Mouse class it provides Up, Down, and MoveTo methods that do all we need.

private void DrawSpirographWaveOnCanvas(AutomationElement canvasElement)
{
   var bounds = canvasElement.Current.BoundingRectangle;

   var centerX = (int)(bounds.X + bounds.Width / 2);
   var centerY = (int)(bounds.Y + bounds.Height / 2);

   // materialise the points: the iterator would otherwise be re-run
   // by First() below and again by the foreach
   var points = GetPointsForSpirograph(centerX, centerY, 1.02, 5, 2, 0, 300).ToList();

   Mouse.MoveTo(points.First());
   Mouse.Down(MouseButton.Left);

   AnimateMouseThroughPoints(points);

   Mouse.Up(MouseButton.Left);
}

private void AnimateMouseThroughPoints(IEnumerable<Point> points)
{
   foreach (var point in points)
   {
       Mouse.MoveTo(point);
       Delay(5);
   }
}

Clicking Buttons

Once sufficient time has elapsed for your colleagues to admire the drawing, the last thing our automation script needs to do is tidy away – close down Paint.Net, in other words. This allows me to demonstrate another aspect of UIAutomation – how to manipulate elements on screen other than by simulating mouse moves and clicks.

Shutting down Paint.Net when there is an unsaved document requires two steps: clicking the Close button, and then clicking “Don’t Save” in the confirmation dialog box. As before, we use UISpy to discover the Automation Id of the Close button and its parents so that we can get a reference to the appropriate AutomationElement:

var closeButton = mainWindow.FindDescendentByIdPath(new[] {"TitleBar", "Close"});

Now that we have the button, we can get hold of its Invoke pattern. Depending on what kind of widget it represents, every AutomationElement makes available certain Patterns. These Patterns cover the kinds of interaction that are possible with that widget. So, for example, buttons (and button-like things such as hyperlinks) support the Invoke pattern, with a method for invoking the action; list items support the SelectionItem pattern, with methods for selecting the item or adding it to the selection; and Text Boxes support the Text pattern, with methods for selecting a range of text and querying its attributes. On MSDN, you’ll find a full list of the available patterns.

To invoke the methods of a pattern on a particular AutomationElement, you need to get hold of a reference to the pattern implementation on the element. First you find the appropriate pattern meta-data. For the Invoke pattern, for example, this would be InvokePattern.Pattern; other patterns follow the same convention. Then you pass that meta-data to the GetCurrentPattern method on the AutomationElement class. When you’ve got a reference to the pattern implementation, you can go ahead and invoke the relevant methods.

Once again, I’ve made all this a bit easier by creating some extension methods (only the InvokePattern is shown here; extension methods for other patterns are available in the sample code):

public static class PatternExtensions
{
   public static InvokePattern GetInvokePattern(this AutomationElement element)
   {
       return element.GetPattern<InvokePattern>(InvokePattern.Pattern);
   }

   public static T GetPattern<T>(this AutomationElement element, AutomationPattern pattern) where T : class
   {
       var patternObject = element.GetCurrentPattern(pattern);

       return patternObject as T;
   }
}

With that I can now click the close button:

closeButton.GetInvokePattern().Invoke();

Then, after a short delay to allow the confirmation dialog to show up, I can click the Don’t Save button:

// give the close dialog a chance to show
Delay();

var dontSaveButton = mainWindow.FindDescendentByNamePath(new[] {"Unsaved Changes", "Don't Save"});

Mouse.MoveTo(dontSaveButton.GetClickablePoint().ToDrawingPoint());
Mouse.Click(MouseButton.Left);

For variation, I click this button by actually moving the mouse to its centre and then performing the click.

All the code is available on GitHub.

Bowing out

When I first read about UI Automation, I got the impression that it was rather complicated, with lots of code needed to make it do anything useful. I tried using Project White (a ThoughtWorks-sponsored wrapper around UIAutomation), thinking that it would save me from the devilish details. It turned out that Project White introduced complexities of its own, and that, actually, UI Automation is pretty straightforward to use, especially when oiled with my extension methods. I’ve had a lot of fun using it to create automated tests of our product over the last couple of months.

Update

16/7/2014: following the demise of code.msdn.microsoft.com, I’ve moved the code to GitHub. I’ve also updated it so that it works with Paint.Net 4.0.

Thursday, 12 February 2009

How to Databind to a SelectedItems property in WPF

Many moons ago, I asked on the WPF forums if anybody had a way of data-binding the SelectedItems property of a ListBox. Standard data binding doesn’t work, because the SelectedItems property is read-only, and understandably so: how would you like it if I injected an arbitrary collection into your ListBox and expected you to keep it up to date as the selection changed?

As no one gave me an answer, I was forced to use my gray matter and invent a solution of my own. Since a similar question came up again recently on Stack Overflow, I thought I’d share what I came up with.

In my solution, I define an attached property that you attach to a ListBox (or DataGrid, or anything that inherits from MultiSelector) and allows you to specify a collection (via data binding, of course) that you want to be kept in sync with the SelectedItems collection of the target. To work properly, the collection you give should implement INotifyCollectionChanged – using ObservableCollection<T> should do the trick. When you set the property, I instantiate another class of my invention, a TwoListSynchronizer. This listens to CollectionChanged events in both collections, and when either changes, it updates the other.
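A much-simplified sketch of the idea looks like this (the real TwoListSynchronizer in the download uses weak events, handles individual Add/Remove actions, and supports the IListItemConverter mentioned below; the class name here is mine):

```csharp
using System.Collections;
using System.Collections.Specialized;

public class NaiveTwoListSynchronizer
{
    private bool _updating; // guards against the two handlers triggering each other

    public NaiveTwoListSynchronizer(IList left, IList right)
    {
        Listen(left, right);
        Listen(right, left);
    }

    private void Listen(IList source, IList target)
    {
        var notifier = source as INotifyCollectionChanged;
        if (notifier == null)
        {
            return;
        }

        notifier.CollectionChanged += delegate
        {
            if (_updating) return;

            _updating = true;
            try
            {
                // crude but illustrative: rebuild the target from scratch
                target.Clear();
                foreach (var item in source)
                {
                    target.Add(item);
                }
            }
            finally
            {
                _updating = false;
            }
        };
    }
}
```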

To help me demonstrate, I’ve knocked up a toy ViewModel: it has an AvailableNames property that will be the data source for the ListBox, a SelectedNames property which is the reason we’re going through the whole exercise, and a Summary property which displays the number of selected items to prove that the data binding is working correctly. I won’t bore you with the code – see the download link below.

The XAML looks like this:

<Window x:Class="SelectedItemsBindingDemo.Window1"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns:d="clr-namespace:SelectedItemsBindingDemo"
  xmlns:ff="clr-namespace:FunctionalFun.UI.Behaviours"
  Title="SelectedItems Binding Demo" Height="300" Width="300"
      >
<Window.DataContext>
  <d:ViewModel/>
</Window.DataContext>
  <Grid Margin="5">
    <Grid.RowDefinitions>
      <RowDefinition Height="*"/>
      <RowDefinition Height="Auto"/>
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="1" HorizontalAlignment="Left" VerticalAlignment="Center"
               Text="{Binding Summary}" FontWeight="Bold" Margin ="0,5,5,5"/>
    <Button Content="Select All" Command="{Binding SelectAll}" Grid.Row="1"
            HorizontalAlignment="Right" VerticalAlignment="Center"/>
    <ListBox Grid.Row="0"
             ItemsSource="{Binding AvailableNames}"
             ff:MultiSelectorBehaviours.SynchronizedSelectedItems="{Binding SelectedNames}"
             SelectionMode="Extended" />
  </Grid>
</Window>

All the magic is initiated by the ff:MultiSelectorBehaviours.SynchronizedSelectedItems attached property on the ListBox, which I set using a data binding to my SelectedNames property. With that in place the SelectedNames collection in my ViewModel is updated whenever SelectedItems on the ListBox changes. And to prove it works the other way too, I’ve created a SelectAll command on the ViewModel that puts all available names into the SelectedNames collection. When you press the button you’ll see that the ListBox obediently updates to show all items selected.

The TwoListSynchronizer code does not make especially thrilling reading, so I won’t show it here. If you should peruse it however, there are a couple of things you might wonder about:

  • I’ve made it implement IWeakEventListener so that it can subscribe to CollectionChanged events using weak events. This means I don’t have to worry about creating managed memory leaks if the source SelectedItems collection has a longer lifetime than the ListBox.
  • The code occasionally refers to something called an IListItemConverter. I’m not making use of this in this application of the TwoListSynchronizer, but it might have uses elsewhere. The scenario I had in mind was one where there is some kind of mapping between the items in the two collections that need to be synchronized.

Update (25/07/2013): All the code is now available on Github (under a Public Domain License). Download a zip file directly here.

Tuesday, 7 October 2008

Doing Planning the Agile way

So we're going to use Agile to manage the development of our secret new product. What does that actually entail? I'm not qualified to say for the general case, but I can show and tell how we've been doing it on our project. Honesty reminds me to state that we didn't invent any of these ideas: most of them came from Mike Cohn's excellent books User Stories Applied and Agile Estimating and Planning.

Handlers planning their dog's route round a Dog Agility course

Telling Stories

My first job, once we decided to go Agile was to fill up the Product Backlog. This is our wish-list of everything we would like to go into the product at some point, though not necessarily in release one. It contains a whole bunch of User Stories, which are concise descriptions of pieces of functionality that a user would like to be in the software.

There's no IEEE standard for User Stories, and that's a good thing, because they're meant to be written either by the end users of the software, or at least as if the users of the software had written them. How many IEEE standards do you know that can be implemented by your clients?

But don't panic: just because there's no standard, doesn't mean there's no help. We followed along with Mike Cohn's suggestion of writing User Stories in the form "As [some kind of user], I want [something] so that [I get this benefit]". In some cases we went on to record some high-level details about how the feature might work, but nothing more than a few sentences. User Stories are supposed to be placeholders for conversations that we'll have with our users (or pretend users) nearer the time when we implement the feature. I found the acronym INVEST helpful: User Stories should be Independent, Negotiable, Valuable, Estimable, Small, Testable.

In our Backlog we've got stories like "As a User, I want to be able to reset my password myself if I forget it, so that I don't get shouted at by the Administrator" and "As a Sales Manager, I want to be able to issue new license keys to customers so that I can make more sales and get a bigger bonus".

We'd already started along the well-trodden road of writing functional specifications before we were lured in a different direction by Agile, so creating our Product Backlog was mostly reverse engineering: working out what reason a user would have for needing each piece of functionality that we'd specified. This was a useful exercise in itself, as I could make sure that all the features had a reason for being other than "Wow! That would be cool to program."

Another time, I'd probably build up the Backlog starting with a Project Charter, or a high level overview of what we want to achieve, and then using a mind-mapping technique to break this down into stories.

Playing at Estimating

So now we know what our software's going to look like. Can we get it done by next week, as the boss wants? The first step in answering that is deciding how big the project is, and given our Product Backlog, the best way of measuring that is by sizing the individual stories. In fact, we don't even need to calculate an absolute size, just a measurement that ranks stories against each other.

For this reason we chose to measure size in Story Points. This isn't a unit that ISO can help you with; each team will have its own definition of a Story Point, which should emerge naturally through the course of the project. We could have chosen to measure in Ideal Days (days assumed to be without distractions and interruptions), but we again heeded our virtual mentor's advice that this would slow us down, as we'd start to think too much in terms of individual tasks, rather than how a particular story compares in size with others in the list.

The one problem with using abstract units like Story Points is deciding how big one is. We solved that problem by scanning through the list, picking a story that looked pretty small and a story that looked fairly large, and assigning them a 1 and an 8 respectively. We then measured other stories up against these two.

The other thing we agreed on was a sequence of "buckets" for holding stories of different sizes. For small stories, it's relatively easy to agree whether they differ by one or two points of complexity; but as features get bigger, it also becomes more difficult to estimate precise differences between them. So we created buckets with sizes of 1/2, 1, 2, 3, 5, 8, 13, 20, 40, 60, 80 and 100 Story Points (you might recognise that as a kind of rounded Fibonacci sequence); the agreement was that if a story was felt to be too big to fit in one bucket, then it would have to go in the next one up. A story that was felt to be bigger than an 8, for example, would have to be assigned to the 13 bucket.
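The bucket rule reduces to a one-liner; a sketch (the names are mine, for illustration only):

```csharp
using System.Linq;

public static class StoryPointBuckets
{
    private static readonly double[] Buckets = { 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 60, 80, 100 };

    // a story that overflows one bucket goes in the next one up,
    // so round up to the first bucket that can hold it
    public static double ToBucket(double rawEstimate)
    {
        return Buckets.First(bucket => bucket >= rawEstimate);
    }
}

// e.g. a story felt to be "bigger than an 8" (say, a 9) lands in the 13 bucket
```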

Planning Poker Cards

With that sorted, we were ready to play Planning Poker. I made my own cards (in Word 2007), each showing the size of one of the buckets, one deck for each developer. If you want to play along but don't like my cards, you can buy professional decks from a couple of sources. The "game" works like this.

We'd pick a Story and discuss it. We had all the people in the room we needed to make sure that questions about the scope of the story got answered. We then had a moment of quiet contemplation to come to our own individual conclusions about the size of the story, and to pick the appropriate card from our hands. Then, on the count of three, everybody placed their card on the table. If everybody had estimated the same, great: we recorded it in our planning tool (VersionOne). If not, we talked some more. What made Simon give the story a 5, while Roman gave it a 1? Then we had another round of cards - or sometimes just negotiated our way to an agreement.

It felt a little strange at first, but soon became quite natural. It's amazing how liberating it is to work by comparing stories, rather than by hacking them into tasks. We had a backlog of about 130 stories, and it took us just under three sessions of 4 hours each to get through the list - not bad for a first go, I thought.

The final thing we did was to triangulate: to go through all the stories that we'd put in a particular bucket and to make sure that they truly belonged there. Was this story packed so full of work that it was flowing over the top of the bucket? Move it up a bucket. What about that one, huddled down in the corner? That would surely fit in the next bucket down?

Self-adjusting estimates

It was tempting, back when we had a Backlog, but no sizes assigned to the Stories to jump straight to the stage of estimating a duration for the project. But that would have bypassed one of the big benefits of using Story Points: self-adjusting estimates.

Imagine you were going on a journey, and you didn't have Google Maps to help you plan. One way of estimating your journey time might be to look at all the cities (or motorway junctions, or other way-points) that you have to pass and guess at the time needed to travel between each. Now suppose you've set off on your journey and travelling the first few stages has taken longer than expected. The children are in the back of the car chorusing "are we there yet?". "No", you say. "How long?" they ask. And what do you tell them? You'd have to work out the journey times for the remaining way-points and apply some kind of scaling factor in order to give them an answer. But you don't do it that way. Do you?

Instead, before you set off, you calculate the total distance. Then, as you're driving, you guess at your average velocity. At any time you can divide the remaining distance by the average velocity to give a fairly good estimate of when you'll arrive - one that, because you've used historical data, automatically factors in things like how overloaded the car is and how bad the traffic has been. If your kids have their lawyers on the phone ready to sue if you don't arrive exactly when stated, you can even use your minimum and maximum velocity to give them a range of times between which you might arrive.

And so it is with Story Points. They say nothing about duration: they are simply a measure of size - like using distance as the first step in estimating journey times. Velocity is a measure of how many Story points you complete in an iteration. Estimating the duration of the project is then as simple as dividing remaining Story Points by velocity, and multiplying up by the iteration length.
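In code, that arithmetic is nothing more than this (the numbers in the example are made up):

```csharp
public static class DurationEstimate
{
    public static double EstimateWeeksRemaining(
        double remainingStoryPoints, double velocityPerIteration, double iterationLengthWeeks)
    {
        // remaining size divided by speed gives iterations; scale up by iteration length
        var iterationsRemaining = remainingStoryPoints / velocityPerIteration;
        return iterationsRemaining * iterationLengthWeeks;
    }
}

// e.g. 120 points left at a velocity of 20 points per 3-week iteration:
// 120 / 20 = 6 iterations, so 18 weeks to go
```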

But we haven't completed an iteration yet, so how do we know what velocity to use? If we had done a similar project using agile we might be able to apply historical values. This might be the case when we're working on version 2 of our product. But for now, we need to go with a forecast of our velocity.

We started by estimating how many productive hours we would have from all developers during an iteration (of three weeks in our case). Industry veterans reckon on up to 6.5 productive hours in an eight-hour working day, though we're still debating this in our company.

Then we picked a small sample of stories of each size from our backlog, trying to include a mix of those that focused on the UI as well as those that mainly concerned the server. Breaking these stories into tasks gave us an indication as to how long each story would actually take. We made sure to include every kind of task that would be needed to say that the story was really and truly done, including design, documentation, coding, and testing of all flavours. Finally we imagined picking a combination of stories of different sizes such that we could finish all of them in the time we had for an iteration. Adding up the Story Points of the stories in the combination gave us a point estimate of velocity.

We would have been foolish if we'd gone forward using just that estimate. So we took advice from Steve McConnell's Cone of Uncertainty (which shows how far out an estimate is likely to be at each stage in a project) and applied a multiplier to our point estimate of velocity to get a minimum and maximum. Since this was getting too much for our little brains to handle, we fired up Excel, and made a pretty spreadsheet of the minimum, expected and maximum number of iterations that we predict the project will take.

Unexpected Benefit

The final estimate was far larger than we'd expected (who is surprised?). So we needed to pare back the scope. If we were basing estimates on a functional specification we could have cut whole features, but it would have been quite difficult to cut parts of a dialog box. But since we were using Stories, we were able to remove them from the release plan and then, simply by subtracting their Story Point estimate from the total, have an easy way to see the impact on the overall project.

At the end of all that, I'm happy. My feeling is that it is the most robust estimate and plan of any I've produced, and for once, we'll have a trivial way of reliably updating the plan as we see how we're getting on.

Thursday, 25 September 2008

Hooking up Commands to Events in WPF

A question came up on the Stack Overflow website a little while back about how to execute a command when an event is raised. The questioner, Brad Leach, is using a Model-View-ViewModel architecture for WPF (of which I'm also a fan) and wanted to know how he could hook up a Command on his ViewModel to the TextChanged event of a TextBox in his View. In his case, it turned out that he got what he needed by data-binding the Text property of the TextBox to a property on the ViewModel; in my answer to his question I outlined an approach that can be used in other cases where properties are not involved, so that the data-binding trick doesn't work. Now I've gone one better, and created a utility class, EventBehaviourFactory, that makes it easy to hook up any WPF event to a Command using data binding.

Using EventBehaviourFactory you define new attached properties, one for each kind of event you want to hook a Command to. Then you set the value of the property on a WPF element, specifying the command that you want to be executed. When the event is raised, the command will be executed, with the EventArgs being passed through as the command parameter.

Using the EventBehaviourFactory

First of all, you need to include my EventBehaviourFactory class in your project. You can either copy the code from the bottom of this post, or get the whole project from MSDN Code Gallery.

Then you need to define a static class to hold a new Attached property that you will attach to an object to specify which command to execute when a particular event is raised. You need to call EventBehaviourFactory.CreateCommandExecutionEventBehaviour and pass it the routed event that you want to handle - note you must pass the static field holding the RoutedEvent object (usually the same name as the event itself, but ending in "Event") not the event property. Just as when creating a standard attached property, you also need to pass the name of the property as a string (this should match the name of the static field that you store the DependencyProperty in), and the type of the class that is defining the property.

As an example, I'll create a property that will execute a Command when the TextChanged event of a TextBox is raised:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

namespace FunctionalFun.Wpf
{
    public static class TextBoxBehaviour
    {
        public static readonly DependencyProperty TextChangedCommand = EventBehaviourFactory.CreateCommandExecutionEventBehaviour(TextBox.TextChangedEvent, "TextChangedCommand", typeof (TextBoxBehaviour));
        
        public static void SetTextChangedCommand(DependencyObject o, ICommand value)
        {
            o.SetValue(TextChangedCommand, value);
        }

        public static ICommand GetTextChangedCommand(DependencyObject o)
        {
            return o.GetValue(TextChangedCommand) as ICommand;
        }
    }
}

The important part is the call to EventBehaviourFactory.CreateCommandExecutionEventBehaviour: this is what creates the attached behaviour. In the download, I've included a Resharper template to mostly automate this step. To make use of the property you just set a value for it on the object whose events interest you, binding it to the appropriate command. Here's an example:

<Window x:Class="FunctionalFun.Wpf.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:ff="clr-namespace:FunctionalFun.Wpf"
    Title="Window1" Height="300" Width="300">
    <Window.DataContext>
        <ff:ViewModel/>
    </Window.DataContext>
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition/>
        </Grid.RowDefinitions>
        <TextBox ff:TextBoxBehaviour.TextChangedCommand="{Binding TextChanged}"/>
    </Grid>
</Window>

The ViewModel that this Window uses (initialised in the Window.DataContext element) is:

using System;
using System.Windows;
using System.Windows.Input;

namespace FunctionalFun.Wpf
{
    public class ViewModel
    {
        public ICommand TextChanged
        {
            get
            {
                // this is very lazy: I should cache the command!
                return new TextChangedCommand();
            }
        }

        private class TextChangedCommand : ICommand
        {
            public event EventHandler CanExecuteChanged;
            public void Execute(object parameter)
            {
                MessageBox.Show("Text Changed");
            }

            public bool CanExecute(object parameter)
            {
                return true;
            }
        }
    }
}
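As the comment in the getter admits, returning a new command on every access is wasteful (and WPF's binding system will see a different instance each time). A minimal fix is to cache the command in a backing field - a sketch, with the field name assumed:

```csharp
private ICommand _textChanged;

public ICommand TextChanged
{
    get
    {
        // Create the command once, and return the same instance thereafter
        if (_textChanged == null)
        {
            _textChanged = new TextChangedCommand();
        }
        return _textChanged;
    }
}
```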

How EventBehaviourFactory works

EventBehaviourFactory uses the attached behaviour technique. It creates a new attached property for each kind of event that you ask it to handle; the property lets you specify which command you want executed when the event is raised. Each property is configured with a PropertyChanged handler (wrapped up in the private ExecuteCommandOnRoutedEventBehaviour class) that starts listening to the appropriate event on an object whenever the attached property is set on that object - and stops listening if the property is set to null.

As you read the code below, you might wonder why I factored out the ExecuteCommandBehaviour class. That was because I was trying to make the factory handle non-routed events as well. I ran into a couple of problems, however, so I took that support out. I'm not sure how useful it would have been anyway, because I couldn't find terribly many non-routed events in WPF. If it would be useful to anybody, let me know, and I'll have another stab at it.

The whole project is available on MSDN Code Gallery. There are a couple of example properties in there, and even a trio of unit tests. As a bonus, you get a Resharper template.

using System;
using System.Windows;
using System.Windows.Input;

namespace FunctionalFun.Wpf
{
    public static class EventBehaviourFactory
    {
        public static DependencyProperty CreateCommandExecutionEventBehaviour(RoutedEvent routedEvent, string propertyName, Type ownerType)
        {
            DependencyProperty property = DependencyProperty.RegisterAttached(propertyName, typeof (ICommand), ownerType,
                                                               new PropertyMetadata(null, 
                                                                   new ExecuteCommandOnRoutedEventBehaviour(routedEvent).PropertyChangedHandler));

            return property;
        }

        /// <summary>
        /// An internal class to handle listening for an event and executing a command,
        /// when a Command is assigned to a particular DependencyProperty
        /// </summary>
        private class ExecuteCommandOnRoutedEventBehaviour : ExecuteCommandBehaviour
        {
            private readonly RoutedEvent _routedEvent;

            public ExecuteCommandOnRoutedEventBehaviour(RoutedEvent routedEvent)
            {
                _routedEvent = routedEvent;
            }

            /// <summary>
            /// Handles attaching or Detaching Event handlers when a Command is assigned or unassigned
            /// </summary>
            /// <param name="sender"></param>
            /// <param name="oldValue"></param>
            /// <param name="newValue"></param>
            protected override void AdjustEventHandlers(DependencyObject sender, object oldValue, object newValue)
            {
                UIElement element = sender as UIElement;
                if (element == null) { return; }

                if (oldValue != null)
                {
                    element.RemoveHandler(_routedEvent, new RoutedEventHandler(EventHandler));
                }

                if (newValue != null)
                {
                    element.AddHandler(_routedEvent, new RoutedEventHandler(EventHandler));
                }
            }

            protected void EventHandler(object sender, RoutedEventArgs e)
            {
                HandleEvent(sender, e);
            }
        }

        internal abstract class ExecuteCommandBehaviour
        {
            protected DependencyProperty _property;
            protected abstract void AdjustEventHandlers(DependencyObject sender, object oldValue, object newValue);

            protected void HandleEvent(object sender, EventArgs e)
            {
                DependencyObject dp = sender as DependencyObject;
                if (dp == null)
                {
                    return;
                }

                ICommand command = dp.GetValue(_property) as ICommand;

                if (command == null)
                {
                    return;
                }

                if (command.CanExecute(e))
                {
                    command.Execute(e);
                }
            }

            /// <summary>
            /// Listens for a change in the DependencyProperty that we are assigned to, and
            /// adjusts the EventHandlers accordingly
            /// </summary>
            /// <param name="sender"></param>
            /// <param name="e"></param>
            public void PropertyChangedHandler(DependencyObject sender, DependencyPropertyChangedEventArgs e)
            {
                // the first time the property changes,
                // make a note of which property we are supposed
                // to be watching
                if (_property == null)
                {
                    _property = e.Property;
                }

                object oldValue = e.OldValue;
                object newValue = e.NewValue;

                AdjustEventHandlers(sender, oldValue, newValue);
            }
        }
    } 
}
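Since the factory accepts any RoutedEvent, the same pattern works for events other than TextChanged. As an illustration (hypothetical names - this class isn't part of the download), here's what a behaviour executing a command on a Button's Click event might look like:

```csharp
using System.Windows;
using System.Windows.Controls.Primitives;
using System.Windows.Input;

namespace FunctionalFun.Wpf
{
    public static class ButtonBehaviour
    {
        // ButtonBase.ClickEvent is the static RoutedEvent field, as the factory requires
        public static readonly DependencyProperty ClickCommand =
            EventBehaviourFactory.CreateCommandExecutionEventBehaviour(
                ButtonBase.ClickEvent, "ClickCommand", typeof(ButtonBehaviour));

        public static void SetClickCommand(DependencyObject o, ICommand value)
        {
            o.SetValue(ClickCommand, value);
        }

        public static ICommand GetClickCommand(DependencyObject o)
        {
            return o.GetValue(ClickCommand) as ICommand;
        }
    }
}
```

It would then be used from XAML just like the TextBox example: `<Button ff:ButtonBehaviour.ClickCommand="{Binding Save}" Content="Save"/>` (assuming a Save command on the ViewModel).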