Wednesday, 31 December 2008

var content = FunctionalFun.Articles[2008].Top()

We come to that time of year when bloggers traditionally (it only takes twice to make a tradition!) indulge in a little self-promotion to drive traffic to under-appreciated bygone posts. And who am I to dismiss the collective wisdom of bloggers around the sphere? So here it is: The Best of Functional Fun 2008

It's always strange looking back on a year just gone. Coming out of the Church Watchnight Service at 00:00 hours on the 1st January 2008 I had no idea that the coming year would see me start a blog, travel half-way round the world to a conference or undertake the expansion of our family (note to my wife: order of appearance in this list does not indicate order of importance).

I launched the blog with a series of posts on Project Euler, publishing solutions for 25 problems so far. According to Google Analytics and Outbrain, the most popular posts in this thread were:

It wasn't long before I diversified. I felt the need to try my hand at more meaty topics, so I tackled the subject of tail-recursion and trampolining (a cunningly placed Wikipedia link keeps this article in the top five!). Just thinking about such energetic-sounding subjects tired me out, so I wrote a follow up article, adding the term Lazy Trampolining to the lexicon.

LINQ has proved a great source of inspiration. Early on, I hitched a ride on the LINQ-to-* bandwagon with my articles on LINQ-to-Console and LINQ-to-Reflection. A few months later, my two-parter on Reporting Progress during LINQ queries and Cancelling Long-running LINQ queries proved very popular, even appearing briefly on the Amazon.com site when Marco Russo linked to it on the blog for his Introducing LINQ book.

Given my love for things new and shiny, it is inevitable that the W*Fs made an appearance. The second-most popular article on the site is my walk-through of how to set up a test X.509 (alias SSL) certificate for WCF - a procedure that isn't as obvious as something that important ought to be. Interestingly, the top article on my blog is a helper class I wrote to work around a limitation in WPF's PasswordBox (Microsoft take note!): I show how to Databind to its Password property. Also popular (ranking #3) is my post kicking off what I warned would be a sporadic series on How to Create a Gantt Control in WPF; so sporadic that in four months only Part 2 has appeared!

As unpleasant as the realisation is, I often have to remind myself that my job isn't all about programming. We actually have to make money by solving customers' problems (I wrote about some of my projects back in July). Project Management is what programmers need to make them profitable, and in the last year I've come to think that Agile techniques (see my take on Agile) are the sugar needed to make the Project Management medicine go down. The post where I described how we're doing Agile planning generated the biggest spike in my traffic stats (thanks to a link from DZone) and the most nagging about when I'd follow it up with a post about how the theory plays out in practice (I'll get there soon Rob!).

And what would a blog be without a little gentle humour? Some may suggest that you look at this site to find out - but I tried as hard as my geeky sense of humour permitted:

With that, it just remains for me to wish all my readers a prosperous and Functional new year, and to thank you all for the encouragement by way of ratings, comments and emails. Happy New Year!

Monday, 29 December 2008

5 Stars for Outbrain

Just a little note to say a public thank you to Outbrain who provide the Ratings widget at the bottom of each post on this blog. I've received truly excellent customer service from them, though they don't charge a penny, so I figured this would be a nice way to repay them in kind.

As I mentioned before, this blog has been suffering a Denial of Kudos attack, even on Christmas Day. When I raised this issue on the Outbrain Feedback site (which they host on the innovative getsatisfaction.com) they responded very quickly and sorted the problem out as far as they could.

But the moment they truly shone was Christmas Day. I'd posted in the morning about the latest crop of spam ratings (I managed to get on the laptop with an excuse to my wife about making sure my brother got his Amazon gift vouchers ;-) ). That evening I checked my mail (I've got Gmail on my phone, so it's easier to do undetected!), and found a response from Kate at Outbrain to say that they'd been through and cleared out the illegitimate ratings, which I know is a time-consuming manual job. That was beyond the call of duty, Kate. Thanks!

As an aside, whenever I invoke the name Outbrain in a post, it usually brings a swift response from them via the comments. I assume you Outbrainers aren't subscribed: are you using some kind of universal blogosphere monitor?

P.S. Kate, they've been at it. Another 20 spam ratings yesterday. I'm wondering if they're enjoying the notoriety: I'll try shutting up about them and see if that dries them up.

Thursday, 25 December 2008

An unoriginal Christmas gift

Happy Christmas to you all.

Do you know what my Christmas present was? Another gift sent via Outbrain:

[Image: Outbrain ratings screenshot]

At least the colours are seasonal!

Tuesday, 23 December 2008

Denial of Kudos Attack on my blog

A couple of months ago I set about trying to garner a few sympathy votes for my blog by complaining about the low star ratings that many of my posts were receiving. And as a demonstration of how not to carry out a controlled experiment I changed the ratings widget at the same time, from the Blogger default to Outbrain. One or other of these measures succeeded and my ratings improved no end.

Over the last few months I've been getting pretty good ratings, at a rate of about 35 a month, mostly 4 or 5 stars. It makes my heart glow rosy pink when I think that people have appreciated my work enough to rate it.

Then at the beginning of December my blog received some unwelcome attention. Logging onto Outbrain one evening, I noticed a crop of about 20 ratings, all sprung up since that morning. Odd, I thought, since that's half my usual monthly quota. And they were all angrily red - 1 star ratings - rather than the glowing golden fours and fives I'd been attracting. The next thing that stood out was that they were all from the UK (Outbrain logs IP addresses with each rating, though it only shares country of origin - it would be useful to see more details), and all posted within the space of a few minutes, faster than anybody could click through the posts. Alarmed, I checked the visitor logs on Feedburner and Google Analytics: tellingly, there were no corresponding visits logged from the UK. My conclusion? This bore all the hallmarks of a Denial of Kudos attack.

But that was just the beginning. Since then, I've racked up over 360 ratings, 1000% up on my usual tally. Most from the same place (so far as I can tell), all the same damning rating. It doesn't happen continuously; usually a block of 40 or so "spam" ratings will appear each day over a weekend, then nothing for several days.  I'm at a loss to know who or what's behind it. It's almost as if someone is monitoring the average rating, then shooting single star ratings at the blog to bring down the score if ever it rises above some mark.

Kate at Outbrain confirmed to me that it does appear to be a bot that's posting these ratings, running in an environment without cookies, which is how it evades the usual guards against multiple ratings. A few other UK blogs have been targeted, but why mine should be among them, I don't know. Outbrain manually cleared up the first hundred or so spam ratings, bringing my average ratings back up again. But more have appeared since then, which explains the current lacklustre scores. I have been promised a second spam clear out, but it has yet to happen.

Is this the handiwork of some script-kiddie? Or somebody more malicious? It's definitely a very high-tech way of forestalling any pride I might experience in my work.

Anybody else suffered anything like this?

Saturday, 13 December 2008

Announcing a new family development project

The family Jack is proud to announce a new project, code named Baby Jack 2.0.

My wife and I decided that the Agile methodology would not be appropriate in this instance, so we will be following the traditional Waterfall pattern. Requirements analysis has been completed (boy or girl will be just fine; complete set of appendages is a must). For the Functional Specification and Design stage we were fortunate in having complete blueprints that we could appropriate for ourselves - reverse engineering DNA is beyond my current skillset, even taking into account the potential of C#4.0. Build is now well underway: after 12 weeks we have a complete implementation, and we're just working on scaling the product up and out. We aim for a single delivery in the June timeframe, based on current progress.

An early screenshot is shown below:

[Image: BabyJack2-0Scan_12_08_2008]

Monday, 1 December 2008

Typealyzer says I'm a Scientist

I submitted my blog for a personality test today, and I suppose, since it didn't write itself, the result must apply to me. According to Typealyzer, I am a Scientist:

The long-range thinking and individualistic type. They are especially good at looking at almost anything and figuring out a way of improving it - often with a highly creative and imaginative touch. They are intellectually curious and daring, but might be physically hesitant to try new things.

The Scientists enjoy theoretical work that allows them to use their strong minds and bold creativity. Since they tend to be so abstract and theoretical in their communication they often have a problem communicating their visions to other people and need to learn patience and use concrete examples. Since they are extremely good at concentrating they often have no trouble working alone.

What are you?

Thursday, 27 November 2008

Conquering VB envy with C#4.0

Whilst your average C# programmer probably considers himself superior to those poor people who habitually use VB.Net, he is at the same time silently envious of one or two (but certainly no more than that!) features of the deluded ones' language. Concise, hassle-free, and above all, Type.Missingless COM Interop for one. And deep XML integration for two. But with C#4.0, and Anders' masterly introduction of objects that can be statically typed as dynamic, that will change. Gone will be all envy; in its place, pure pity. ;-) Here's one example of why.

I finally found time (we've just finished our iteration), and an excuse (I've got to give a briefing on PDC), to delve into my smart black USB hard disk ("the goods") that I brought back from the PDC, and to fire up the Virtual Hard Drive where lives Visual Studio 2010 and C# 4.0. As an aside, I'm running it under VirtualBox, rather than Virtual PC, without any difficulties at all, and with very good performance.

In Anders' presentation about the Future of C# he showed a simple example of how the dynamic features of C# 4.0 could be used to implement a property bag, where you could write (and read) arbitrary properties on the bag and it would store whatever you assigned to it. The punch line was that he implemented it in a couple of lines of code.
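If you're curious what that might look like, here's a rough sketch of the property-bag idea. I've written it against the DynamicObject API as it later shipped in .Net 4.0 (TryGetMember/TrySetMember) rather than the CTP API my code below uses, so treat it as an approximation of Anders' demo, not a transcript of it:

using System;
using System.Collections.Generic;
using System.Dynamic;

class PropertyBag : DynamicObject
{
    readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    // Called when code assigns to an arbitrary property on the dynamic object
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }

    // Called when code reads an arbitrary property back
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _values.TryGetValue(binder.Name, out result);
    }
}

// Usage:
// dynamic bag = new PropertyBag();
// bag.Colour = "Red";
// Console.WriteLine(bag.Colour);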

That gave me the idea that it shouldn't be too difficult to simulate something like VB.Net XML integration, where you can take an XML object, and access its attributes and elements as if they were properties of the object. And it wasn't. Now don't get too excited: I spent about fifteen minutes on this, but the outcome should be enough to whet your appetite.

First, the end-result:

static void Main(string[] args)
{
    var xml = "<Pet Type='Cat' Name='Leo'><Owner Name='Sam' Age='27'/></Pet>";
    dynamic dynamicXml = new DynamicXElement(xml);

    Console.WriteLine("Name={0}, Type={1}", dynamicXml.Name, dynamicXml.Type);
    Console.WriteLine("Owner={0}, Age={1}", dynamicXml.Owner.Name, dynamicXml.Owner.Age);
    Console.ReadLine();
}

In line 4 I'm creating one of these new-fangled dynamic object thingies, declaring it with the magical dynamic keyword. The dynamic keyword tells the compiler to do all member resolution on the object at run time, rather than compile time. And how does it do the member resolution? That's where the DynamicXElement comes in. The DynamicXElement is equipped with the right knobs and levers so that it can participate in dynamic resolution. It must be tremendously complicated then? You'll see in a minute.

Lines 6 and 7 show off the amazing capabilities of this DynamicXElement. Having supplied it with some XML when we initialised it, we can now access values of attributes just as if they were properties of the element. Likewise with child elements and their properties. Isn't that exciting? Doesn't it deserve some applause (as the more desperate of the PDC presenters might say!)?

So how does dynamic binding work? I won't pretend to you that I understand it fully at the moment. What I do know is that if you want to have a say in how member resolution works, your objects have to implement IDynamicObject. I also know that in .Net 4.0 there will be a base implementation of that interface called DynamicObject (follow the link to get the code for it); if you inherit from that it takes care of the minor complications for you, leaving your code to be as simple as:

class DynamicXElement : System.Dynamic.DynamicObject
{
    XElement _xml;

    public DynamicXElement(string xml)
    {
        _xml = XElement.Parse(xml);
    }

    public DynamicXElement(XElement element)
    {
        _xml = element;
    }

    public override object GetMember(System.Scripting.Actions.GetMemberAction info)
    {
        var attribute = _xml.Attribute(info.Name);

        if (attribute != null)
        {
            return attribute.Value;
        }

        return new DynamicXElement(_xml.Element(info.Name));
    }
}

As you can see, I only need to override one method, GetMember, to give me the control I need over dynamic behaviour (there are other overrides available if you want to handle calls to property setters or methods). This method is called whenever the runtime wants to resolve a call on a property getter. The GetMemberAction object that is passed to the method contains a Name property giving, unsurprisingly, the name of the property that's being asked for. In my ultra-simple implementation, I'm using that to first check for the existence of an attribute with that name; or, failing that, assuming that it must be the name of a child element. I'm returning the child element in a DynamicXElement wrapper so its attributes and children can be accessed in the same dynamic way.

Simple wasn't it?

But please look gently at this code. I know that it will fall to pieces in anything approaching a real-world situation (like when an element has an attribute and a child element with the same name). If you want to see some more substantial examples, look at Nikhil Kothari's blog. Here's his dynamic JSON wrapper, for instance.

Friday, 21 November 2008

Null Reference checking, functional style

I love Extension methods. I only wish I could pull one out of my computer, so that I could give it a great big hug. Anyway ... That little outburst was prompted by the way that extension methods helped me out with an elegant solution when I needed to do a null reference check on a variable.

We've all written code like

variable = SomeMethodThatMightReturnNull();

if (variable == null) 
{
	throw new ExceptionIndicatingTheNullness();
}

which is alright, but don't you think there are too many curly braces there? You don't? Well, I suppose there are only two, but wouldn't it be more elegant if there were none?

I've written before about a nice feature of extension methods: because they are effectively static methods you can call one on a null reference, and it doesn't blow up. Instead it just passes the null value through into the method.

So we can create this:

public static class FluentMethodExtensions
{
    public static T EnsureNotNull<T>(this T instance, Func<Exception> exceptionBuilder) where T : class
    {
        if (instance == null)
        {
            throw exceptionBuilder();
        }
        return instance;
    }
}

(Note the where constraint on the method (line 3): we need that so that we can do the null comparison in line 5.)

With that in place we can zap those curly braces:

variable = SomeMethodThatMightReturnNull().EnsureNotNull(() => new ExceptionIndicatingTheNullness());

How about that then?

Of course, the usual reason for throwing your own custom exception is so that you can include helpful details about the problem. In that case you might want to create some overloads to EnsureNotNull. How about this one, intended to allow you to pass through parameters so that you can build them into a message using string.Format:

public static T EnsureNotNull<T>(this T instance, Func<object[], Exception> exceptionBuilder, params object[] exceptionMessageParameters) where T : class
{
    if (instance == null)
    {
        throw exceptionBuilder(exceptionMessageParameters);
    }

    return instance;
}

You use it like so:

// ...
Range range = GetRange(workbook, rangeName)
                .EnsureNotNull(BuildInvalidRangeNameException, rangeName);
// ...

private ExcelException BuildInvalidRangeNameException(object[] exceptionParameters)
{
    return new ExcelException(string.Format("Named range {0} does not exist.", exceptionParameters));
}

You can create your own. Go on. Try it. It's fun.

Thursday, 20 November 2008

Scary MSBuild log

This is not what you want to see in your MSBuild log:

[Image: UndesirableMsBuildLog]

This unwelcome tale appeared in my log because I had this Target in the project file:

<Target Name="DeleteOldResultFiles">
  <ItemGroup>
    <LogFile Include="$(ReportsDirectory)\*.*"/>
  </ItemGroup>
  <Delete Files="@(LogFile)" TreatErrorsAsWarnings="true"/>
</Target>

Unfortunately, I had mistyped the $(ReportsDirectory) property as $(ReportDirectory), and said mistyped property did not exist, so MSBuild was defaulting to the empty string. Thus the LogFile Item was being populated with a list of all the files in my C:\ drive, and the Delete task was obediently purging them.
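With hindsight, a simple guard along these lines would have stopped the Delete task from ever seeing an empty property (the error text here is just illustrative):

<Target Name="DeleteOldResultFiles">
  <!-- Guard: fail fast if the property was mistyped or never defined -->
  <Error Condition="'$(ReportsDirectory)' == ''"
         Text="ReportsDirectory is not set; refusing to delete anything." />
  <ItemGroup>
    <LogFile Include="$(ReportsDirectory)\*.*"/>
  </ItemGroup>
  <Delete Files="@(LogFile)" TreatErrorsAsWarnings="true"/>
</Target>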

The moral of this story: don't run MSBuild as Admin - or if you're as foolish as I was, make sure you've at least got a cloned Virtual machine (as I had) from which to copy any files which might get trashed.

Wednesday, 19 November 2008

New syndrome identified: Pre Posting Tension

A little while back, I wrote that a beloved member of our household is suffering from a rare and incurable condition: Sudden Onset Digital Amnesia. Now I have diagnosed that I myself am afflicted with a syndrome previously unknown to Medical Science: Pre Posting Tension. As my contribution to the greater good of mankind, I will, in this post, catalogue the symptoms of this condition. If you recognise yourself as a fellow sufferer, please get in touch: we may be able to form a support blog.

A cycle of symptoms

First, realise that the symptoms come in cycles. The trigger seems to be the discovery that somebody, anybody, has linked to an article on my blog. This produces a feeling of euphoria. If the link is from a high page-rank site like DZone (thanks mswatcher!) the euphoria is elevated to near ecstasy, increasing with every up-vote received.

The excitement is short-lived, however. Soon after closing the Feedburner Site Stats window bearing the good tidings of incoming links, a great wave of worry and doubt sweeps over me. Will I be able to find material for a follow-up post? Can I again craft new and interesting phrases to describe my future subject matter? Will the visitors return? How many of them will subscribe, and how soon will they unsubscribe if the quality of posts diminishes?

Then the nervousness begins, increasing with every hour that passes unblogged. Glimpsing Windows Live Writer in my Start Menu, or the Blogger icon in my Favourites list causes me to tremble with anxiety. Every line of code written is scrutinised for post potential, every fleeting thought examined for article-worthiness.  Then a plateau is reached when inspiration dawns: my mental state stabilises as words and phrases begin to congregate together in my mind.

Once fingers begin tapping keyboard, tension eases somewhat. But woe-betide anybody who interrupts, because this is when irritability sets in; concentration is total and all else is forgotten as words bed themselves into the page. If body is dragged away from the keyboard, mind remains at work - resulting in responses even shorter than the usual, manly, grunts when questioned. Internal pressure again builds up until the words get a chance to escape onto the page. Then, disaster. My train of thought comes up against a red light: the flow of words dries up. Writer's block has set in. Panic takes hold: visitor numbers will surely be dropping off by now. Only fresh content can restore them, and fresh content is held up in the sidings of my mind.

I fumble for words, and gradually the stream of thoughts begins again. The post rumbles on to completion. I check it over and over again, trying to winkle out the obvious errors that I no are lurking their[1]. Then fresh doubt springs up. What if readers don't like it? What if I've written something senseless? Maybe it doesn't hang together. More often than not, I answer my self-doubt with Pilatean response: "What I have written, I have written", and hit "Publish", before I beat myself up any further.

Relief comes flooding over me, as the post flashes up on my blog. But what's this on my Browser Toolbar? A shortcut to Google Analytics? I wonder whether anybody's read that article yet...

Afterword

My wife proof-read this, and commented that it would be amusing if it wasn't true. My protest that it was all exaggerated for effect met only with a dismissive "Puh"!

Footnotes

  1.  Sic - in case it wasn't obvious from the context!

Friday, 14 November 2008

Passing around references to events (almost)

Have you ever wished that you could pass a reference to an event on an object in C#? Perhaps you wanted to tell another object to monitor an event on something that you own. How do you say that in C#? If you've used C# 3 for any length of time you'll know that, through a magic combination of lambda expressions and Expression trees, it is possible to pass around references to properties and methods in a nice, type-safe, refactoring-proof way. But up till now I've not read of any way to do the same for events[1].

I've been doing some thinking about this recently, and I think I've invented a nice solution that will work in many cases. One case where I needed it was when creating a class to help me unit test events. I wanted an easy way to monitor a class, prod it and poke it in various ways, and then check that it raised the appropriate events. I came up with the EventAsserter. Hopefully I'll be able to share this with you in its full glory before long, but for now, I'll show a prototype as an example of using my event-passing technique.

Getting Hooked

The assumption is that you are calling a method, and want to be able to tell that method to hook up to an event of your choosing on an object that you specify. The basic idea is for you to pass into the method a delegate that allows it to call you back with its own event handler for you to hook up appropriately.

It looks like this:

using System;
using System.Diagnostics;

namespace EventMonitor
{
    class Program
    {
        static void Main(string[] args)
        {
            var eventRaiser = new EventRaiser();
            var eventAsserter = new EventAsserter<EventArgs>(handler => eventRaiser.TestEvent += handler);

            eventRaiser.RaiseEvent();

            eventAsserter.AssertEventOccurred();
        }

        class EventRaiser 
        {
            public event EventHandler<EventArgs> TestEvent;

            public void RaiseEvent()
            {
                TestEvent(this, EventArgs.Empty);
            }
        }

        class EventAsserter<TEventArgs> where TEventArgs : EventArgs
        {
            bool _eventOccurred;

            public EventAsserter(Action<EventHandler<TEventArgs>> attacher)
            {
                attacher((object sender, TEventArgs e) => _eventOccurred = true);
            }

            public void AssertEventOccurred()
            {
                Trace.Assert(_eventOccurred, "Event did not occur");
            }
        }
    }
}

It's a very simple example. I've got a dummy class, EventRaiser, that exists only to supply and raise an event. Then there's my prototype EventAsserter. This needs to be a generic type (taking the Type of the EventArgs from the event that you're wanting to handle) so that it knows what kind of event handler to supply: that's what it does in its constructor. It receives a delegate, and it calls this delegate back, passing through an event handler, and it expects that the delegate will hook the handler up to the correct event. Finally, the Main method ties everything together. As it constructs a new instance of EventAsserter (in line 11) it hooks it up to the TestEvent event of EventRaiser. Then it fires the event, and checks that it was indeed raised.

A Fallback solution for awkward cases

The sharp-eyed amongst you will have immediately spotted the flaw in my cunning plan: this will only work if the events that you are interested in have been declared using EventHandler<T>. This ought to be the case in a lot of code written since .Net 2.0, when the generic EventHandler was introduced. But what if you have events using custom delegates - PropertyChangedEventHandler for example, like the modified version of EventRaiser below?

You could special-case the technique for each type of event you want to handle, creating an overload for each type of EventHandler. Perhaps a nicer solution is to use the adapter pattern:

// ...
var eventAsserter = new EventAsserter<PropertyChangedEventArgs>(
                handler => new PropertyChangedEventAdapter(eventRaiser).PropertyChanged += handler);
// ...

public class PropertyChangedEventAdapter
{
    public event EventHandler<PropertyChangedEventArgs> PropertyChanged;

    public PropertyChangedEventAdapter(INotifyPropertyChanged source)
    {
        source.PropertyChanged += HandleEvent;
    }

    private void HandleEvent(object sender, PropertyChangedEventArgs e)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(sender, e);
        }
    }
}

Or, you could consider this solution below: it's less elegant, and encapsulation takes a bit of a beating, but workable as far as I can see.

using System;
using System.Diagnostics;
using System.ComponentModel;

namespace EventMonitor2
{
    class Program
    {
        static void Main(string[] args)
        {
            var eventRaiser = new EventRaiser();
            var eventAsserter = new EventAsserter<PropertyChangedEventArgs>(
                asserter => eventRaiser.PropertyChanged += asserter.HandleEvent);

            eventRaiser.RaiseEvent();

            eventAsserter.AssertEventOccurred();
        }

        class EventRaiser : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            public void RaiseEvent()
            {
                PropertyChanged(this, new PropertyChangedEventArgs(""));
            }
        }

        class EventAsserter<TEventArgs> where TEventArgs : EventArgs
        {
            bool _eventOccurred;

            public EventAsserter(Action<EventAsserter<TEventArgs>> attacher)
            {
                attacher(this);
            }

            public void HandleEvent(object sender, TEventArgs e)
            {
                _eventOccurred = true;
            }

            public void AssertEventOccurred()
            {
                Trace.Assert(_eventOccurred, "Event did not occur");
            }
        }
    }
}

In this case the EventAsserter has to make its handler public. Then in its constructor it passes itself to the attacher delegate. The calling code then needs to know that it has to hook the HandleEvent method to the event that it is interested in. Not pretty, I know; but pragmatic at least. It just goes to show the benefit of using EventHandler<T> over defining your own custom delegates.

What do you think? Can you see yourself making use of this? Any refinements that you can suggest?

Footnotes

1. If you try assigning a delegate to an event within a lambda expression that is converted to an Expression tree you get the error "An expression tree may not contain an assignment operator".
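To illustrate (with made-up type and member names): the same hook-up lambda is perfectly happy as a plain delegate, but won't convert to an expression tree.

using System;
using System.Linq.Expressions;

class Raiser
{
    public event EventHandler Changed;

    public void Raise()
    {
        if (Changed != null) Changed(this, EventArgs.Empty);
    }
}

class FootnoteDemo
{
    static void Main()
    {
        var raiser = new Raiser();

        // Fine: an ordinary delegate can perform the subscription.
        Action<EventHandler> attach = handler => raiser.Changed += handler;
        attach((sender, e) => Console.WriteLine("Event raised"));
        raiser.Raise();

        // Won't compile: "An expression tree may not contain an assignment operator".
        // Expression<Action<EventHandler>> attachExpr = handler => raiser.Changed += handler;
    }
}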

Thursday, 6 November 2008

Careers Presentation - Mathematicians should consider Software Development

Yesterday I was invited back to Birmingham University, from whence I graduated five years ago, to give a careers presentation. I obviously haven't moved around regularly enough to get off the mailing lists! They wanted me to talk to the students about how my Maths degree is helping my career in software development.

Here's an only slightly edited version of what I had to say:

"I was in Los Angeles at a conference last week. You might have heard about this conference in the news: It was the one where Microsoft announced the next versions of  Windows. I wish I had time to tell you about the cool stuff I saw: Windows 7, Windows Azure, Oslo, Dublin, and a whole bunch of other code-names; but I haven't.  I mention it in only in passing because that was how I came to use Skype for the first time ever, talking to my wife and daughter back here in England. I might be a geek,  but I am a bit slow when it comes to getting the latest gadgets.

Skype is amazing: it's easy to set up, there's no need to remember a phone number – I just had to enter my wife's name to connect with her – and the sound quality was crystal clear. And even when you call from the States, it's free - so I bet you students have all used it? Have you ever wondered how they did that? How they make Skype work? Or maybe it is that you've played a game on the Wii, or Playstation 3, and wished that you could create something like that.

If you become a Software Developer you might just get the chance to do just that: to work on the next Facebook, the next Ebay, the next Skype. Or maybe you won't work on anything so glamorous, but for your customers, whoever they are, your work might be just as significant - even more important - in getting their work done. That is part of the privilege and responsibility of being a developer: you can make the difference between a good day and a bad day for whoever is using your software.

My Background

I started working at Paragon five years ago - one week after my honeymoon, and two weeks after my final fourth year presentation. I started as a Junior Developer, responsible for implementing software that somebody else had designed, with a lead developer taking charge of all the project management. Over time I was given more responsibilities: like talking to clients to find out what they wanted their software to do, and leading other people in designing it. In the last few years I was doing more and more of the project management, and less and less writing computer code - I was getting withdrawal symptoms, and had to start doing projects in the evenings to stay sane! So I asked to be moved back in the other direction: now I am responsible for designing the new product that we're developing, writing code and leading the development team (it’s a huge team -  of two people - including me!). Someone else has taken charge of keeping the project on track.

I'll be honest, and say that my job isn't really very glamorous - I don't jet off to conferences every week. A lot of the time I sit at my desk: much of that time I'm typing away at the keyboard - sometimes I just stare into space - thinking and daydreaming - but all related to the project of course. Depending on the projects I'm working on, I occasionally have to meet up with clients and give presentations about our work, or lead discussions with them about what they want doing. One of the most exciting parts is at the start of a project, when all we have is a blank whiteboard, and we brainstorm together [sorry, that's not politically correct: I should say, we have an idea shower together - hmm, sounds worse!] to decide how we are going to solve the problems that we face. It can be daunting, but also very satisfying once we've cracked it.

The Joy of Software Development

So what do I like about being a Software Developer?

  • There are always new things to learn. The number of new technologies is increasing at an exponential rate – at least, that's how it feels. Companies like Microsoft and Google are bringing out new Software, and new Software Development Kits all the time. I do a lot of work with the Microsoft .Net Framework, and when it first came out, I thought that within a few years I could become an expert in using it. Now I reckon that it isn't possible for somebody to become expert in more than a tenth of it. You'll never run out of new things to explore.
  • There are always new problems to solve. Computers are getting more and more powerful, yet easier to program. They are being used to tackle an ever wider variety of challenges. At the conference last week, the head of Microsoft Research showed us just a few examples of things they've been working on. Software like the Worldwide Telescope, that anybody can download: it stitches together detailed pictures from observatories all over the world so that anybody can explore the universe from their armchairs. Even amateur astronomers have been using it to make new discoveries. Other new areas include Computational Biology - decoding the human genome, investigating cures for HIV, all with software; Robotics, Social software - applications like Facebook that scale up to connect millions of people; right down to creating computer games that teach kids how to program.
  • There are opportunities to do something significant, to make a direct impact on people's lives. Paragon is only a small company, yet I've been able to work on a couple of pieces of software that are used to manage the clean-up operation at a major nuclear site.
  • Software development lets you turn dreams into reality fast. An Architect can imagine a building - but then he has to draw up his plans, and get approval for them; and he can’t complete the project without builders, who'll probably make a mess of his fine design, and could take years to complete the project. With software you can imagine something, code it up straight away and get immediate feedback - never more so than with the tools we have today, which do the hard bits like managing computer memory or putting fancy graphics on screen, leaving you to do the really interesting parts.
How Maths Helps

As you can tell, I think that being a Software Developer makes for a great career. But how did my Maths Degree help prepare me for it?

  • Most importantly, it taught me how to think. Learning mathematics trained me to think precise, logical, step-by-step thoughts: the only kind that a computer understands.
  • During my time at Birmingham, I learnt how to solve problems. I remember Problem Workshops - Friday Afternoons, up in the Maths Library. There I learnt how to strip the fluff away from a problem, to distil it down to its essence. I grew adept at looking at a problem from different angles - if we squint at it, does it look like a different problem, one that we do know how to solve?
  • And, of course, there are areas of mathematics that I learnt on specific courses that have proved to be very useful. Computers do a lot of Boolean algebra: And, Or, Not. Knowing your truth tables will stand you in good stead. Graph Theory is another thing that comes up a lot. Perhaps you've already noticed, but Facebook is one big Graph: the people are the nodes, and every time you send a friend request you are inviting that person to create an edge between you. In my job, I've found the stuff I learned about Operations Research to be really helpful. In fact, one of my first jobs was to implement software that used meta-heuristics like Simulated Annealing, Tabu search and Genetic algorithms.
  • Lastly, you'll be glad to know that the projects they make you write, and the presentations that you have to do, all help. It's very strange, but before clients will give you money to do some work, they insist that you explain to them what you are going to do, and why it will be good value for money. And when you've built the software for them, for some reason they want to know how it works and how they should use it. And you never know: in five years' time, you might be invited back to give a careers presentation!
What to do next

So if I’ve convinced you that Software Development is something you’d like to look into for a career, what should you do? The main thing is: try it out, get some experience, build up a portfolio of work that you can show to a potential employer. There aren’t many people who would volunteer for you to try out your brain surgery skills on them; and you can’t practise rocket science in your back garden. But anybody with a computer can get experience in being a Software Developer. There’s lots of free software out there to help you. In particular, I would recommend looking at projecteuler.net, a site where there are over two hundred mathematical problems which mostly can only be solved by writing a computer program: it’s an excellent way of building up your skills in both areas. Then have a look at my blog: blog.functionalfun.net, where I’ve written up some of my solutions – and talk about software development in general.

The other thing I’d recommend is to find yourself a job working as a software developer during your summer holidays. I know that a lot of companies are interested in finding new talent, and offering vacation jobs is one of the best ways to do it; they often have little projects that they want doing, but don’t have time to do the work themselves. Find companies that are doing things you’re interested in, write them a nice covering email along with your CV explaining all the hobby projects you’ve worked on and the experience you’ve built up, and I’m sure somebody will snap you up. Keep pestering them until they do - you've nothing to lose! It sure beats stacking shelves or working in a call centre.

I did this, and got jobs for all three of my summer vacations, and I got job offers from both the companies that I worked at; Paragon was one of those companies. They even sponsored me during my fourth year, which was very handy.

So, to finish: now is a very exciting time to become a Software Developer. It’s a very fulfilling and worthwhile career, with lots of scope for creativity. Your mathematical training will equip you well, and give you a big head-start. And you don’t need to take my word for it - download the software, and try it out for yourselves."

Resources

As well as handing out company pens, and squeezy, stress-relieving light bulbs, I handed out a sheet listing a few useful websites:

Mathematics and software

http://projecteuler.net: Mathematical challenges to solve by writing software
http://blog.functionalfun.net: Samuel Jack’s blog about software development, and solving Project Euler

Software Development

http://www.eclipse.org/: Free Integrated Development Environment for writing Java software
http://www.microsoft.com/express/: Free Integrated Development Environments for creating software for the Microsoft .Net platform.
http://msdn.microsoft.com/en-us/xna/default.aspx: Free platform for creating games for Windows and the Xbox 360

http://stackoverflow.com/: Great site for getting answers to programming questions on all platforms
http://sourceforge.net/: Repository of Open Source Software, many projects welcoming new contributors
http://codeplex.com/: Open Source software for the Windows platform.

http://blogs.msdn.com/coding4fun/: Lots of interesting software projects to try at home
http://www.codinghorror.com/blog/: Great blog about software development

Tuesday, 4 November 2008

Passwords for the Virtual PC images on the PDC 2008 Hard disk

Yesterday I asked Google if it knew the passwords for the Virtual PC images that are on the Hard disk that they gave out at Microsoft PDC 2008. It didn't, so, forthwith, I shall teach it.

Visual Studio 2010 CTP

Username: TFSSETUP, Password: 1Setuptfs
Username: Administrator, Password: P2ssw0rd
Username: TFSREPORTS, Password: 1Reports
Username: TFSSERVICE, Password: 1Service

OSLO/Dublin/WF/WCF

Username: Administrator
Password: pass@word1

Update: if you are typing the password on a UK keyboard, you might find that the @ sign needs to be swapped for a " sign.

Friday, 31 October 2008

PDC Day 4: XAML, .Net 4.0, MGrammar, and F#

Has anybody got a handkerchief? sniff, snuffle, wipes away tears. PDC is over. They’ve turned off the wireless. We’ve been kicked out of the conference centre. All my new geek friends are heading home. Will I ever find people that understand me so well again? Never mind. It was good while it lasted, and I’ve collected a good bunch of business cards. Perhaps geek friendships work best by email and twitter anyway.

And get this: I’ve got Raymond Chen’s autograph. On his own business card. He even promised to stop by my blog. Better make sure it looks tidy; quick check for inanities: geek royalty might be here any minute. Raymond, if you’re reading this: I’ll stop gushing now! Promise!

XAML

No keynote today, but that meant more time for a whole load of interesting sessions. The first one I attended was on XAML. That’s right: a whole session on eXtensible Application Markup Language. They announced a new version of it, with a raft of new language features, and new XAML Readers and Writers.

The new language features include:

  • The ability to reference other elements by name – independently of anything that the target framework (like WPF) provides
  • Full support for Generic types, everywhere in the language – this is done by using {x:TypeArguments} when defining instances for example
  • Better support for events – including a markup extension that can return a delegate
  • The ability to define new properties of a class within XAML
  • Use of Factory methods to create instances

They’re also introducing a new library, System.Xaml.dll is its name, I believe. In this library will be classes for working with XAML in a much more fine-grained way than the current XamlReader and XamlWriter give us, and with much better performance. Basically they work with a new intermediate format of XamlNodes: kind of like an Object Model for XAML. For example, XmlXamlReader will take an xml file, and return some XamlNodes representing the xaml at a higher level of abstraction than xml. Then an ObjectWriter is used to take those XamlNodes and turn them bit by bit into objects. The cool thing is that you can do the reverse: there’s an ObjectReader that will take an object and return the XamlNodes that represent it, and an XmlXamlWriter that pushes those XamlNodes back to xml. They’re also making public the BamlReader and BamlWriter classes.
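To make the node-loop idea concrete, here's a rough sketch of reading XAML into objects one node at a time. I've used the class names as they eventually shipped in .Net 4.0's System.Xaml (XamlXmlReader and XamlObjectWriter), which differ slightly from the CTP names mentioned in the session, and "View.xaml" is just a placeholder file name:

using System;
using System.Xaml;
using System.Xml;

class XamlNodeLoop
{
    static void Main()
    {
        using (XmlReader xml = XmlReader.Create("View.xaml"))
        {
            var reader = new XamlXmlReader(xml);                      // XML -> stream of XAML nodes
            var writer = new XamlObjectWriter(reader.SchemaContext);  // XAML nodes -> live objects

            while (reader.Read())
            {
                // Each node can be inspected (or transformed) here before being replayed
                writer.WriteNode(reader);
            }

            Console.WriteLine(writer.Result);                         // the constructed root object
        }
    }
}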

As a demo of all the new stuff, they showed a whole client-server app written entirely in XAML: a WPF UI, with Windows Workflow handling the button clicks and calling a WCF service defined in XAML, where a Windows Workflow again did the work. Impressive!

.Net 4.0

After that I went to a session on CLR Futures: basically the foundations of .Net 4.0. The nice feature here, as I already mentioned, is the ability to load .Net 4.0 into the same process as .Net 2.0. Now I actually can’t see myself using this feature directly, but I’m sure it will open up a lot of opportunities in other areas, mainly around application addin models; no longer will it cause a problem if an application loads an addin bound to one version of the .net Framework, and then a second addin needing a newer version; both can happily co-exist.

There are going to be a number of refinements throughout the CLR:

  • Improvements to the Garbage Collector to reduce the number of times it has to pause the application to do garbage collection
  • Improved Thread Pool to support the new Parallel Extensions framework
  • A new feature that will prevent certain critical exceptions being caught – the ones that indicate a corrupt application that should bomb out as quickly as possible: things like AccessViolation and others.
  • New Profiling APIs that will allow Profilers to attach to running applications – a feature focused on server scenarios.
  • Managed Crash dumps can be opened in Visual Studio

Two announcements that I found most interesting: they’re going to add tuples to the BCL – we have the F# and Dynamic languages teams to thank for this, and also for the BigInteger class that they’ve finally got right, with the optimisation expertise of the Microsoft Solver Foundation team.
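For a flavour of those two additions as they eventually shipped in .Net 4.0 (Tuple in System, BigInteger in System.Numerics):

using System;
using System.Numerics;

class BclAdditions
{
    static void Main()
    {
        // Tuples: a lightweight way to group values without declaring a type
        Tuple<string, int> pair = Tuple.Create("answer", 42);

        // BigInteger: arbitrary-precision integers, at last in the BCL
        BigInteger big = BigInteger.Pow(2, 128);

        Console.WriteLine("{0} = {1}, 2^128 = {2}", pair.Item1, pair.Item2, big);
    }
}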

The second one: a new Contract class, and associated features. This is like Assert on steroids. It allows code contracts to be specified within methods. For example, there are the Contract.Requires methods that set up pre-conditions, and the Contract.Ensures methods that set up post-conditions. All these conditions are established at the beginning of the method, then as a post-processing step, somebody (not quite clear whether it’s the compiler or the JIT) rearranges the post-conditions to make sure they’re checked at the end of the method. There are also going to be tools that will check a code-base for situations where these conditions might be broken – I’m guessing that Pex is the place to look for this.
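As a rough illustration of the idea, using the System.Diagnostics.Contracts API as it later shipped in .Net 4.0 (the PDC bits may differ in detail); remember that runtime checking relies on the post-processing step described above:

using System.Diagnostics.Contracts;

static class AccountMath
{
    public static decimal Withdraw(decimal balance, decimal amount)
    {
        Contract.Requires(amount > 0);                      // pre-condition
        Contract.Requires(amount <= balance);               // pre-condition
        Contract.Ensures(Contract.Result<decimal>() >= 0);  // post-condition, checked on method exit
        return balance - amount;
    }
}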

MGrammar

After a hastily consumed lunch-in-a-box, I headed for Chris Anderson's talk on the new MGrammar language and tooling for building Domain Specific Languages. This was exactly what I was hoping for, and Chris did a brilliant job of showing it off. The topic deserves a blog post of its own, but suffice it to say for now that the Intellipad text editor provides a great environment for developing and testing the DSL, and even provides syntax highlighting for your new language once you’ve defined it. There are also APIs that can be used in your own applications that accept a grammar and a string and will parse it, and provide back a data representation of it. Exciting stuff.

F#

The last presentation of the conference was Luca Bolognese’s Introduction to F#. Luca did a great job of showing the strong points of F#: the succinctness of the syntax and its integration with other .Net languages. The last point he showed really wowed the audience: he had written a real world application that pulled stock prices off the Yahoo finance website and did some calculations with them. The punch-line was when he added one “async” keyword and sprinkled a few “!” symbols, and the whole function became parallelized.

The remains of the day

And that was the end of the conference. But the day held one last surprise for me. On the bus back, I found myself sat next to two guys from the Walt Disney World Company. We got talking about the conference (I doubt anybody on the bus was discussing much else), and when we got back to the hotel they mentioned how they were going to meet up with some colleagues at Downtown Disney, and they invited me to go along with them. We had a great dinner down there, munching over M and F# as well as pizza. So Dave and Kevin: thanks a bunch. It was awesome!

PDC is at an end. Tomorrow I fly home. Then the real fun begins: making sense of everything I’ve heard, and putting it to work.

Thursday, 30 October 2008

PDC Day 3: Microsoft Research

I’ve not given Google Reader much attention over the last few days – I’ve been a bit busy with other stuff – so tonight when I finished up I thought I’d better pay down some of my aggregator debt: I had something like 450 posts to look through, and my aged laptop stuttered through them rather slowly. Thus I don’t have a great deal of time to give you today’s news before I fall asleepppppppppppppppppppppppppp – oops - at the keyboard: a guy sat in the armchair next to me in the keynote did just that, and his whole screen filled up with zs!

The Keynote

The keynote this morning was given by Rick Rashid, Head (or Director, or Vice President or whatever top-flight title they’ve given him) of Microsoft Research. He’s clearly a distinguished guy, even if he did say so himself. He’s been in the same job for 17 years. Before that he worked on various groundbreaking projects, like NeXT OS, which later became MacOS X, and one of the very first networking games, AltoTrek. Since joining Microsoft Research, Rick has led the team that delivered the first version of DirectX, and other tools that became shipping products.

After its 17 years of growth, Microsoft Research now has more than 800 researchers: that was equivalent to creating a new computer science faculty every year. They have Turing Award and Fields Medal winners in their ranks, and more members of the National Academy of Engineering than IBM.

Rick mentioned a couple of interesting areas of research. One was Theorem Proving software. For example, they have developed Terminator: software that is able to prove termination for a very large class of programs. In connection with this, one Microsoft Researcher proved Church’s Thesis, which was an open problem for 50 years.

Changing up a gear (for energy efficiency), Rick introduced a colleague who talked about the work they were doing with sensor networks. As a demo, they had rigged up the PDC hall with a network of 90 environmental sensors. Live on stage they showed the temperature readings that the sensors were giving, superimposed on a view of the hall from Virtual Earth. The presenter fast-forwarded through the data, showing the hall cooling at night, then warming up again as the lights were turned on, then even more so in the regions around the doors as attendees streamed in. This kind of information can be used to optimise the use of Air Conditioning in a building, for example. Microsoft themselves are using this to make their new Data Centers more energy efficient. An extension of this is SensorWeb, a web-based Sensor sharing project (all hosted in the Cloud of course) that allows many researchers from all over the world to contribute their own sensor data to a big pot for interesting analysis.

Rick then flicked through some other demos from the Computational Biology arena (Human Genome decoding, HIV research) – there’s even code for this stuff that you can get from CodePlex.

They finished with two cool demos. One was of Boku, a game for Children to teach them how to program. They’ll release a version of it for the XBox later next year. Children can create their own games by putting objects and characters in a world, then visually assigning rules to the things to tell them how to behave. For example, you can drop a couple of apples in the world, then configure a little creature to move towards an apple when sighting it. It looked great.

The other demo was of a future version of the Microsoft Surface device called SecondLight. This one uses some clever materials to allow secondary displays in the space above the surface. They showed Virtual Earth in satellite view displayed on the surface, then they held a piece of tracing paper above the device, and the street view was projected onto the tracing paper. Cool stuff. It works by using a voltage to toggle the surface material very quickly between opaque and transparent. While it is opaque, the surface display is projected; when it is transparent, the image for the secondary display is shown.

The Sessions

On that high note the keynote ended, and I arose from my comfy chair for the last time. I attended Daniel Moth’s excellently presented session on the Parallel Task library, and the features they are adding to Visual Studio 2010 to support it. They announced that the library (which includes PLINQ) will be shipping with .Net 4.0. In Visual Studio there will be two new features to debug Tasks (which are like light-weight threads): the Parallel Tasks window, which is a bit like the Threads window but shows running and scheduled Tasks; then there’s the Parallel Stacks window, which shows a tree view of all Tasks and their relationships, and the stack for each Task. There’s a good MSDN article on these features.
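As a taster of the Task style (using the System.Threading.Tasks and PLINQ APIs as they shipped in .Net 4.0, which may differ slightly from the CTP bits shown at PDC):

using System;
using System.Linq;
using System.Threading.Tasks;

class TaskSketch
{
    static void Main()
    {
        // A Task is a light-weight unit of work scheduled onto the thread pool
        Task<long> sumTask = Task.Factory.StartNew(
            () => Enumerable.Range(1, 1000000).Sum(i => (long)i));

        // PLINQ: the same query shape, parallelised with a single AsParallel() call
        int evenCount = Enumerable.Range(1, 1000000).AsParallel().Count(i => i % 2 == 0);

        Console.WriteLine("Sum = {0}, evens = {1}", sumTask.Result, evenCount);
    }
}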

I spent most of the rest of the day in Oslo sessions. I think the picture is becoming a bit clearer now; I’m going to one last session tomorrow from Chris Anderson to learn about the language for building DSLs (In other news, I added his autograph to my Oslo Modelling book today). After that I hope to blog my impressions of it. In the meantime, you’ll have to content yourselves with Martin Fowler’s analysis!

One last piece of excitement. I filled in my session evaluation forms today, being the good boy that I am. After completing one of them an announcement came up on screen that I’d won a prize. Since it didn’t invite me to Click Here!, but rather to go to the main information desk, I took it seriously, but didn’t hope for more than a tee-shirt. I was actually handed a copy of Windows Vista Ultimate. Now since I already have a spare copy of Vista, I’m inclined to find an innovative way of giving it away. Watch this space!

Wednesday, 29 October 2008

PDC Day 2: Windows 7, VS2010, Office 14 and Oslo

I’m dedicated. While all the other PDCers are still out partying at Universal Studios, I came back to my hotel room in order to bring you the news from the PDC. Or to put it another way, I wasn’t really taken with the chainsaw-wielding zombies lunging at the legs of passing guests, or the ghouls that lurked behind pillars and leaped out to induce a scream. I picked up a free meal at Pizza Hut, including the biggest Funnel Cake I’ve ever seen, and then snuck my way back to the bus, making sure to keep well away from the scaries.

But let's go back to the start. For me, Day 2 of PDC tops Day 1 by some margin. Yesterday’s Keynote by Ray Ozzie on Azure was heavy on marketing but light on the interesting stuff. Today’s all-morning-long keynote was packed with geekness.

Windows 7

The first new thing to be demoed was Windows 7 (that’s not a codename by the way. They’re actually calling it that). Although they’re positioning it as “Windows Vista done right” there’s actually some cool new stuff here – features that should be really useful. New Window Management features for a start. How much time do you waste positioning your windows so that you can see them side by side? Now you can drag them towards the top or sides of the screen, and they will dock – rather like in Visual Studio. “Jumpers” on menu bars is another. These are little application specific tasks that an app can display hanging off its icon in the start menu even before it’s launched.

All the utilities, like Paint and Wordpad, get an overhaul (“we’ve decided we’ll do it once every fifteen years, whether they need it or not”, said Steven Sinofsky). They are all Ribbonified, and Wordpad gains Open XML and ODF support. For developers, there’s the nice feature of being able to mount Virtual Hard Disks (VHDs) from within Windows, and even boot from them. And then there’s finally proper multi-monitor support for Remote Desktopping.

And Microsoft would not like me to forget multi-touch. If you have a touch enabled screen you’ll be able to use multiple fingers to manipulate things. They demoed all the cool zooming and scrolling and gestures stuff that we’ve envied on the iPhone.

Lastly, but not leastly, one that the UK government will surely appreciate: BitLocker encryption for USB memory sticks. I need say no more.

.Net and Visual Studio

Scott Guthrie came on to the stage next, to much cheering and clapping, and it was well-deserved. He brought news of a new set of controls for WPF being released to the web today. Amongst those going live are the DataGrid, a new DatePicker, a Calendar control, and the Visual State Manager that has been ported from Silverlight. All these can be found on Codeplex. They’ve also released a CTP of an Office 2007-style Ribbon Control and RibbonWindow that they’ve been promising for a little while; this is apparently to be found on the Office UI site, but I couldn’t see it!

On the .Net 4.0 front, the Gu-ru announced that CLR 2.0 and 4.0 will run side by side in process. This is good news for Addin developers, and may also hold promise for those that want to develop shell extensions in .Net. There will also be improvements for WPF such as DeepZoom integration (bringing parity with Silverlight) and improved Text rendering. Other than that, they’re being rather vague. I went to a whole presentation on WPF Futures later in the afternoon, only to discover that they didn’t really have any firm plans they wanted to talk about beyond the controls they’ve already announced.

The most exciting news in this area is that Visual Studio 2010 is going to be rewritten to use WPF and managed code. They'll be making use of the Managed Extensibility Framework (MEF) to allow anybody to create extensions to it. As an example, Mr Guthrie created a new ScottGu mode for code comments. He used managed code to write an extension to the text editor that displayed the XML comments above a method in a rich WPF view – including bug numbers formatted as hyperlinks that could be clicked to see full details.
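
I haven't seen the editor extensibility APIs yet, so what follows is only a sketch of the MEF pattern itself, using the System.ComponentModel.Composition bits: the host declares a contract, extension authors export implementations, and the host discovers them through a catalog. The ICommentAdornment interface is a made-up stand-in, not the real editor API.

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    // A made-up contract standing in for whatever the editor really exposes.
    public interface ICommentAdornment
    {
        string Decorate(string xmlComment);
    }

    // An extension author exports an implementation...
    [Export(typeof(ICommentAdornment))]
    public class BugLinkAdornment : ICommentAdornment
    {
        public string Decorate(string xmlComment)
        {
            // Imagine spotting "Bug 1234" here and wrapping it in a hyperlink.
            return xmlComment.Replace("Bug", "Bug(link)");
        }
    }

    class Host
    {
        static void Main()
        {
            // ...and the host discovers every export without knowing about it in advance.
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            using (var container = new CompositionContainer(catalog))
            {
                foreach (var adornment in container.GetExportedValues<ICommentAdornment>())
                    Console.WriteLine(adornment.Decorate("<summary>Fixes Bug 1234</summary>"));
            }
        }
    }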

Office 14

The big news for Office is that there will be new web-based versions of Word, Excel and OneNote, written in Silverlight. These will allow collaborative editing of documents, with automatic synchronisation of changes when multiple users are editing the same document. They demoed a spreadsheet being edited in the browser, complete with charting and formula support. The UI looks much the same as on the desktop, because they've created a Ribbon control for Silverlight.

Oslo

Oslo, for me, was the wildcard of the PDC. It sounded exciting, but would it be useful? I was expecting that this would feature in a keynote, especially when I saw that Chris and Don were scheduled for a slot in Hall A. But they spent that time doing a live coding exercise in Azure, which was interesting but not what I'd hoped for. Instead we had to wait till the afternoon to discover what it is all about. And I'm still not quite sure!

What I saw this afternoon was a new language called “M”. M allows data to be defined and stored in a database (a repository, as they call it), and allows queries to be written on top of that data in a strongly typed fashion. Don Box used the analogy that M is to T-SQL what C is to assembler. The idea, it seems, is to make it very easy to write applications that are configured, and even driven, by data. One example might be a business-rules application, where the rules are written in M and pushed into the repository; the application can then query the repository to determine how it is supposed to behave.

Another component of Oslo is a second language, called MGrammar. MGrammar is a language for defining Domain Specific Languages: in fact, the M language itself is defined using MGrammar. MGrammar allows the syntax of a language to be defined (in a way similar to ANTLR, if you've ever used that), along with projections that map from the DSL to M, so that your DSL can then be interpreted by querying the repository.

There’s a nice text editing tool for this (IntelliPad), and a graphical tool (Quadrant) as well, though I’ve not seen that yet. Everything, including the compiler, is in managed code, and it is all highly extensible.

I will freely admit that, at the moment, I only have the edge pieces of the jigsaw, and a few loose floaters in the middle. I'll let you know when I've slotted everything into place. Fortunately they were handing out free copies of a new book, The “OSLO” Modelling Language. I got Don Box to sign my copy, so that must surely help me understand it!

Tuesday, 28 October 2008

The dynamic Future of C# 4.0

The session I'd been most looking forward to was Anders Hejlsberg's presentation on the Future of C#. It was clear that many others were too, because the session room – the second biggest in the convention centre, holding around 2,500 people – was packed. Anders stood, regarding us all with a fatherly smile as we filed in. A few dared to approach him. One came forward to get an autograph, and was clearly so overwhelmed that, when the great man asked him for a pen, he forgot that he had one attached to his conference pass, and went back to fetch one from his seat. Others, more brazen, even asked to be photographed with him.

Then the session began. Anders kicked off by outlining the history of C#. You wouldn't believe it, but it will be ten years old in December – counting from conception, rather than from birth. C# 1.0 was about managed code; 2.0 was about generics; 3.0 introduced Functional Programming and LINQ. And now 4.0 introduces dynamic programming.

Anders introduced us to the dynamic keyword, which is used when declaring a variable, function return type, or parameter. As Anders put it, it is used to declare that the static type of the thing is dynamic. When a variable is marked as dynamic, C# won't try to resolve calls made on that object at compile time: instead it delays all method resolution to run time, but uses exactly the same algorithm it would have used at compile time, so that overload resolution works correctly.
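
A minimal sketch of what that looks like in practice (based on what was shown, not on a shipped compiler):

    using System;

    class Program
    {
        static void Describe(int n)    { Console.WriteLine("int overload: " + n); }
        static void Describe(string s) { Console.WriteLine("string overload: " + s); }

        static void Main()
        {
            dynamic value = 42;
            Describe(value);      // resolved at run time to Describe(int)

            value = "hello";
            Describe(value);      // now resolved to Describe(string)

            // This compiles, but would fail at run time, because no such
            // member exists on the runtime type:
            // value.NoSuchMethod();
        }
    }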

The basis of dynamic resolution is the IDynamicObject interface, shared with the Dynamic Language Runtime. This has on it methods like Invoke, which the compiler will call to allow the object itself to participate in method resolution. As well as allowing easy interaction with other dynamic languages such as IronPython and IronRuby, this also has the benefit of making COM interop much more natural. Rather than having to resort to Reflection when encountering “peter-out typing” (Anders' term for the phenomenon where COM object models become more loosely typed the further out along a property path you go), using dynamic typing will allow natural-looking code all the way.
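
Here's a rough sketch of an object participating in its own member resolution. I'm using the DLR's DynamicObject base class (in System.Dynamic) as a stand-in for the IDynamicObject plumbing Anders described; the exact surface may well change before release.

    using System;
    using System.Dynamic;   // DLR base class standing in for IDynamicObject (assumption)

    // An object that answers any method call by reporting what was asked of it.
    public class Chatty : DynamicObject
    {
        public override bool TryInvokeMember(
            InvokeMemberBinder binder, object[] args, out object result)
        {
            result = "You called " + binder.Name + " with " + args.Length + " argument(s)";
            return true;   // tell the binder we handled the call ourselves
        }
    }

    class Demo
    {
        static void Main()
        {
            dynamic chatty = new Chatty();
            Console.WriteLine(chatty.AnythingAtAll(1, 2, 3));
            // prints: You called AnythingAtAll with 3 argument(s)
        }
    }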

Another big help when talking to COM is support for optional and named parameters. Anders made his mea culpa on this one. You can now write code in C# talking to COM the way you could ten years ago in VBA, he said; no longer do we need to call upon Type.Missing to flesh out the method calls to COM objects.
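
The shape of it, with a made-up Word-ish API rather than real COM interop:

    using System;

    class WordLikeDocuments
    {
        // Parameters with defaults become optional at the call site.
        public void Open(string path,
                         bool readOnly = false,
                         bool addToRecentFiles = true,
                         string password = "")
        {
            Console.WriteLine("{0} (readOnly={1}, recent={2})",
                              path, readOnly, addToRecentFiles);
        }
    }

    class Demo
    {
        static void Main()
        {
            var docs = new WordLikeDocuments();

            // No Type.Missing padding: just leave out what you don't care about...
            docs.Open(@"C:\docs\report.docx");

            // ...and name the one argument you do want to set.
            docs.Open(@"C:\docs\report.docx", readOnly: true);
        }
    }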

The final thing he announced was support for co-variance and contra-variance of generic interfaces and delegates. No big surprise there: Eric Lippert has been trailing this for a while (hypothetically, of course!). The syntax they’ve decided upon is that type parameters that are only going to be used to return values can be marked with out in the type declaration, whereas parameters that are used for receiving values will be marked with in. For example, IEnumerable<out T>, or IComparable<in T>. I think this is sufficiently complicated to warrant me writing another blog post on it, so that I can understand it better myself.
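
In the meantime, a small sketch of the two directions (assuming the BCL's own IEnumerable<T> and IComparer<T> pick up the annotations, as the talk implied):

    using System;
    using System.Collections.Generic;

    class Demo
    {
        static void Main()
        {
            // Covariance: IEnumerable<out T> only hands T values out, so a
            // sequence of strings can be treated as a sequence of objects.
            IEnumerable<string> strings = new[] { "alpha", "beta" };
            IEnumerable<object> objects = strings;

            // Contravariance: IComparer<in T> only takes T values in, so a
            // comparer of objects can stand in for a comparer of strings.
            IComparer<object> generalComparer = Comparer<object>.Default;
            IComparer<string> stringComparer = generalComparer;

            foreach (object o in objects)
                Console.WriteLine(o);

            Console.WriteLine(stringComparer.Compare("alpha", "beta"));
        }
    }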

All in all, no huge surprises, but very useful nonetheless. Anders' conclusion was that C# 4.0 allows you to do things in your code that you were always surprised you couldn't do before.

But that wasn't the end. The maestro had one last treat in store for us: a sneak preview of something coming after C# 4.0. Anders explained that, for historical reasons not hard to determine, the C# compiler, being the very first C# compiler ever written, is not actually written in C#. This is a big millstone round their collective necks, he admitted. So they have already started work on rewriting the compiler in C#, and when it's done they will release it for the world to use in its own applications.

It will include the ability to get at a language object model of your code, enabling easy refactoring or meta-programming. It could be embedded in applications to enable runtime evaluation of expressions. To much applause, Anders concluded his session with a demonstration of how easy it is to create a REPL (Read-Evaluate-Print Loop) using the new C# compiler component.
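
Just to pin down what a REPL is, here's the shape of the loop. ICSharpEvaluator and its Evaluate method are entirely hypothetical stand-ins for whatever the compiler-as-a-service API turns out to be.

    using System;

    // Hypothetical stand-in for the future compiler-as-a-service API.
    public interface ICSharpEvaluator
    {
        object Evaluate(string expression);
    }

    class Repl
    {
        static void Run(ICSharpEvaluator evaluator)
        {
            while (true)
            {
                Console.Write("> ");
                string line = Console.ReadLine();            // Read
                if (string.IsNullOrEmpty(line) || line == "exit") break;

                object result = evaluator.Evaluate(line);    // Evaluate
                Console.WriteLine(result);                   // Print
            }                                                // Loop
        }
    }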

Cloudy Azure Skies at the PDC

Microsoft are clearly getting in touch with their Arty side. The most recent evidence came today when they announced Windows Azure; earlier hints can be found in “Silverlight” and Project Indigo (which later became WCF).

Windows Azure is Microsoft's new Cloud Operating System. Not an operating system in the sense of something that you install on your PC, but an operating system in that it provides a layer of abstraction upon which interesting things can be built. Just as Windows XP or Windows Vista abstract away the messiness of interacting with the keyboard, mouse, graphics card, disk drives and so on to provide a nice API with which developers can do interesting things, so Windows Azure abstracts away the difficulties of purchasing hardware for data centres, provisioning machines with identical copies of code, configuring load balancers, backing up databases and the like, and provides a nice programming model and user interface to streamline all those things.

We were shown a demo of a standard ASP.Net application being packaged up and tagged with a config file. This package was then submitted to the Azure portal, where it was given a URL and then published. It was instantly available at that address. The magic came when they pretended that load on the application had increased and more computing power was needed: just changing a number in the config file was sufficient to scale the application out across another couple of machines – Windows Azure taking care of the details.

Another component is the cloud storage. They have an API that allows Blobs, Queues and simple Tables to be managed in the cloud, with space being provided elastically. All the data is available through REST APIs, with a .Net object model on top for easier access through managed code.
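
As a taste of how simple the REST side is, fetching a blob from a container marked for public read access is just an HTTP GET. The account and blob names below are invented for illustration.

    using System;
    using System.Net;

    class BlobDemo
    {
        static void Main()
        {
            // URL shape: http://<account>.blob.core.windows.net/<container>/<blob>
            // "myaccount" and "photos/pdc.jpg" are made-up names.
            var url = "http://myaccount.blob.core.windows.net/photos/pdc.jpg";

            using (var client = new WebClient())
            {
                byte[] bytes = client.DownloadData(url);
                Console.WriteLine("Downloaded {0} bytes", bytes.Length);
            }
        }
    }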

And of course we developers need our tools, so there is a Windows Azure SDK which you can download today. This provides templates for Visual Studio, but more importantly, it provides a mini-version of the Azure platform for your desktop, so that you can build and test applications without needing to upload them.

In the CTP that they released today, only managed code is supported, but the intention is to allow native code, and other languages like PHP, in the near future. Also in the future will be a “Raw” mode that will allow virtual machines to be hosted in the cloud, à la Amazon's EC2.

The intention is to release this commercially in 2009, though Microsoft apparently will be guided by the community as to when they think it is ready.

Day 1 at the PDC

I'm back in my hotel room after a long day at the convention centre. My bed behind me is radiating sleepy snugliness, but I'm ignoring it for a short while longer because I know that you're all hungry for news.

My day in public began at 6:35 AM (I'm guessing you're not interested in the minutiae of what went on before that). The penalty for staying so conveniently close to the airport as I am (the runway is about 500 yards from my window) is a much longer journey to the convention centre. Microsoft have kindly provided free shuttle buses to and from all the official conference hotels, running throughout the morning. Obviously most people wanted to make sure of a good seat (and breakfast before that) at the first keynote, so the first bus, due at 6:45 but actually 10 minutes late, was heavily oversubscribed. It had 17 spaces, and there were around 60 of us.

Fortunately more buses arrived at five or ten minute intervals. Unfortunately, my training as an Englishman did nothing to prepare me for boarding the bus in such a situation. No queue, just a dash for the door, elbows at the ready. It took several buses before I steeled myself to go for it – making space for the ladies first, of course.

The journey across LA took about 35 minutes. Another opportunity to observe those wonderful skinny palm trees, this time looming out of the misty morning in the glow of a sun still fairly low on the horizon. Once at the convention centre I made a dash for the registration desks, expecting there to be long queues. I was pleasantly surprised at the efficiency, however – a theme repeated throughout the day. I cleared registration in a few minutes, then headed for breakfast.

I’ve never eaten in a dining hall so big. This one had more serving lanes even than the humongously wide LA freeways: I reckon there were about 15 lanes, each double sided, from which hungry guests could choose a whole variety of ways of breaking their fast. At the end of the lanes were huge bins of fruit-juice bottles, nestling in ice-cubes. Then to find a table: there must have been half an acre of them to choose from, but to make it easy, conference staff positioned themselves next to vacant spaces and waved red “Available Seating” signs above their heads. If you wanted to find somebody in particular though, you needed other help. A guy on my table was phoning directions to his friend for about five minutes before they were reunited. I suggested that Microsoft might want to introduce a table level view to the next version of Virtual Earth.

Having tucked away breakfast as fast as was decent, I made for Hall A for the keynote. They might as well have called it Hangar A – it was big enough. Four huge screens flanked the stage, two on either side; I restricted my gaping to a few seconds, however, because I needed to find Jeff Sandquist. Jeff, head of the team responsible for Channel 9, had contacted me last week to offer me one of the “best seats in the house”. Apparently he'd been following my blog, and wanted to give me a treat. Thanks again, Jeff. The best seats in the house turned out to be 10 Barcalounger reclining armchairs, set up in the middle of the conference hall. Very comfortable they are too. I suspect I'll be especially grateful tomorrow, when the keynote is scheduled to last all morning. Speakers have a hard time keeping me on the edge of my seat though!

The crowds that head towards the doors after the sessions – well! “Herd” would be a better descriptor. Mostly they head for the food troughs to stock up – tables set up in the lobbies and hallways, piled high with fruit and snacks; or the watering holes – refrigerators stocked with cans, urns of tea and coffee, even chest freezers well stocked with ice creams. Truly a place flowing with milk and honey.

[What's that? You wanted to hear about the sessions? Technical stuff, you mean? Everyone else is writing about that. I'll get there in the next post.]

Everyone else seemed to be able to get their connectivity fix in the session rooms. My ancient work laptop acknowledged the presence of a wireless hotspot, but refused to connect to it (hint, hint, boss – what about an upgrade? Oh, wait – there are my expenses to pay first!). No matter, though: where there's space in the hallways between the food tables and the refrigerators, there are abundant PCs set up, all invitingly running IE8. Nice to be able to catch up with my wife over Google messenger.

There was no official dinner laid on at the end of the day. Instead there was the Partner Expo Reception. All the sponsors had chipped in to lay on a multi-cultural, slap-up buffet, with serving points strategically located amidst the sponsor booths. I wandered around with a BBQ steak on my flimsy plate, clutching my plastic fork, wondering how I was supposed to eat this not-exactly finger food. In the end I found a perch, and managed to saw off enough to determine that it hadn't really been worth it. Not to worry. The chicken wings, egg rolls, sticky rice, pilau rice, shredded beef and smoked chicken wraps were all good, and the pastries, Hershey's chocolates and jelly beans more than compensated for any inconvenience caused.

But by now my body clock was reminding me that it is still not quite at home in the new time zone, and that you, my dear readers, would be expecting news of my doings. So I called it a day and headed for the shuttle bus, this time uncrowded, and was deposited safely back at the hotel, where I'm just about to hit Submit so that you can vicariously join in with my adventures.

Monday, 27 October 2008

An unusual Pre-con day at the PDC

Today is pre-conference day at the Microsoft PDC. Industry luminaries and experts like Charles Petzold, Mary Poppendieck, Jeff Prosise and Juval Lowy are giving attendees the benefit of their wisdom and experience on subjects ranging from WPF and WCF, through advanced Windows debugging and .Net performance optimisation, to Agile software development. But I didn't go. Instead, I went to church.

Getting to church wasn't just a matter of popping round the corner. I suspect that folk in LA rarely “pop round the corner” for anything. Coming from the UK, everything in Los Angeles seems so spread out. I suppose that unlike the UK, they have no “green belt” to worry about: need more space? Just colonise another block of the desert. As a consequence of this capacious town planning, street maps of LA can easily mislead eyes conditioned to maps of UK cities – as I and my legs have now discovered to our cost.

The first part of the journey was easy. After stoking up for the day on an archetypal American buffet breakfast (waffles with maple syrup and sausage, egg and bacon on the same plate at one point!), I sauntered out of the lobby to pick up the shuttle bus to the airport. I first saw these shuttles when I came out of Arrivals at the airport yesterday. Swarms of them circle the different terminal pick-up points, day and night. Every major hotel within hearing distance of the airport, and all the car rental companies, not to mention the long-stay parking providers, have their own fleet of buses to convey customers cost-free to their place of business.

Then it was on to the FlyAway bus service headed for Van Nuys (or Van Eyeeeeeees, as our imposing female driver called out at each stop). The journey out along the San Diego Freeway gave me an excellent sample of suburban and (even more sub)urban LA, all of which I could observe in air-conditioned comfort from my front-seat vantage point. From what I could see, the City of Angels is mostly flat, except for the lumpy bits where they dump the canyons.

The flat parts of the city are divided into streets of which the Romans would have been proud and a geometrist prouder. Many of the streets are lined with palm trees, tall, leggy things, determined not to be overshadowed by the office buildings that surround them. To me, it looked like some of them had even resorted to surgery – being sophisticated L.A. palm trees - because instead of terminating in the mass of fronds that usually marks the top of a palm, these ones had another burst of trunk, and then a second bunch of greenery. Others have gone in for body ornaments, rigging themselves up as mobile phone transmitting towers.

Having alighted at the Van Nuys bus station, I consulted my map and confirmed that I needed to head for Roscoe Boulevard to catch the final bus. The map provided by the bus company showed the bus station virtually butting up to the Boulevard (as you can see for yourself), but rather worryingly, the friendly security guard who I asked for directions had to consider for a few moments before pointing out my way. He estimated “about 5 to 10” in answer to my question of how many minutes it would take to walk there. I suspect that in a former life he was a software developer, because by the end of the walk the actual figure lay just beyond the upper end of that range.

As I stood at the final bus stop, I offered a silent prayer that the driver of the bus would be a helpful one. I knew where I needed to get to, but I didn't have a clue which stop I needed, because the Google map that I'd printed out only showed a fragment of the neighbourhood of the church; my legs baulked at the thought of getting off a block too early, and my watch pointed out that I didn't have time for mistakes. My prayer was answered. Not only did the driver offer to call out my stop; he also refused the five-dollar bill that I proffered for the $1.50 exact-change fare, and took just a dollar bill instead. And it turned out that the bus stop was right outside the door of the church.

I've never been to a church as big as this. Grace Community Church was founded fifty years ago, and their first chapel, still on the site, was about as big as a good-sized English church building. The new “worship center” can, at a guess, hold between three and five thousand, with a stage up front for a small orchestra and a good-sized choir.

Phillip Johnson was preaching today. I've been reading his blog for some years now, which was how I found out about the church, and I've always found him to be a very interesting and edifying writer. Today he spoke, very thought-provokingly, on the third of the Ten Commandments: “You shall not take the name of the Lord your God in vain, for the Lord will not hold him guiltless who takes his name in vain” – a prohibition on using the name of God lightly and without due reverence. Phil made the observation that, though atheists deny there is a God, they see no contradiction in invoking his name at times of shock or frustration or anger. He reminded us that in the world of commerce, businesses protect their names very forcefully through trademark law, because their brands and reputations depend on them. So why should God care about his name any less?

In all the other sins prohibited by the commandments, there is some profit or pleasure for the sinner, however momentary or fleeting. But in breaking this commandment there is no gain whatsoever. Even though it is now a habit for many people to punctuate their conversation with God’s name, it is still an act of rebellion and defiance of this third commandment. That is why every one of us who has used God’s name lightly is guilty. But Phil concluded by reminding us of the way to be freed from all guilt: the salvation and full pardon that we can have by believing in Jesus Christ.

Plenty of food for thought whilst waiting for the bus, and then on the walk back to the Van Nuys bus station. The other thing on my mind was the sun beating down on my head. I had, at the prompting of my wife, looked up the weather for LA before I came. I'd noted that the temperature would be in the mid to high twenties, and she had thoughtfully packed short-sleeved shirts. But I failed to carry the thought through, to figure out that it wasn't going to be patio heaters providing the warmth, and to take appropriate precautions – basics, like a bottle of water and a hat. By midday the sun was as hot as on any mid-summer's day back home, and the best I could do by way of shading myself was to stand in the shadow of a lamp post – not terribly effective when you consider my girth.

But I made it safely to the bus station without dehydrating, and lived to regale you with the tale. It’s been a thought-provoking day of rest. And now, just one more night to go before all is revealed!

Sunday, 26 October 2008

Pre-conf in the sky

Well, here I am in LA, probably about to blog a load of nonsense, because I've now been awake for 24 hours. At five this morning, a taxi arrived to take me to the station, to catch the six o'clock train to Heathrow, where I boarded the 11:30 flight to LA – which turned into a 12:30 flight because of technical problems with the boarding tunnels. We were safely delivered across the pond by 11:15 PM, UK time, just in time to give my wife the good news before she went to bed. I made myself stay up, because going to bed when my newly adjusted watch said 3:15 just seemed ridiculous.

I feel that the conference has already begun. Seated next to me on the plane was Geff, of NxtGenUG fame. He introduced me to Guy Smith-Ferrier, whose book on .NET Internationalization I happened to have ordered just last week. Guy very kindly gave me a personalised presentation on the possibilities for localisation in WPF. It was one of the most interesting presentations I have ever attended, not least because the speaker was crouched in the aisle of the plane next to me, constantly putting himself on hold as fellow passengers passed by!

Later on, a few of us, including Mike Taulty from Microsoft, had our very own pre-conference Open Space. I forget what was on the agenda, but it probably included localisation again, because Guy was there, and I get the impression that he has a one-track mind!

Tomorrow I intend to make a bus trip up to Sun Valley to worship at Grace Community Church. Then, let the conference begin.

Now the room is beginning to sway. I think the sleep debt collector has come knocking...