Friday 31 October 2008

PDC Day 4: XAML, .Net 4.0, MGrammar, and F#

Has anybody got a handkerchief? sniff, snuffle, wipes away tears. PDC is over. They’ve turned off the wireless. We’ve been kicked out of the conference centre. All my new geek friends are heading home. Will I ever find people that understand me so well again? Never mind. It was good while it lasted, and I’ve collected a good bunch of business cards. Perhaps geek friendships work best by email and twitter anyway.

And get this: I’ve got Raymond Chen’s autograph. On his own business card. He even promised to stop by my blog. Better make sure it looks tidy; quick check for inanities: geek royalty might be here any minute. Raymond, if you’re reading this: I’ll stop gushing now! Promise!

XAML

No keynote today, but that meant more time for a whole load of interesting sessions. The first one I attended was on XAML. That’s right: a whole session on eXtensible Application Markup Language. They announced a new version of it, with a raft of new language features, and new XAML Readers and Writers.

The new language features include:

  • The ability to reference other elements by name – independently of anything that the target framework (like WPF) provides
  • Full support for Generic types everywhere in the language – for example, by using the x:TypeArguments directive when defining instances
  • Better support for events – including a markup extension that can return a delegate
  • The ability to define new properties of a class within XAML
  • Use of Factory methods to create instances

They’re also introducing a new library – System.Xaml.dll is its name, I believe. In this library will be classes for working with XAML in a much more fine-grained way than the current XamlReader and XamlWriter give us, and with much better performance. Basically they work with a new intermediate format of XamlNodes: a kind of object model for XAML. For example, XmlXamlReader will take an xml file and return XamlNodes representing the xaml at a higher level of abstraction than xml. Then an ObjectWriter takes those XamlNodes and turns them, bit by bit, into objects. The cool thing is that you can do the reverse: there’s an ObjectReader that will take an object and return the XamlNodes that represent it, and an XmlXamlWriter that pushes those XamlNodes to xml. They’re also making the BamlReader and BamlWriter classes public.
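
To make that node-pumping pattern concrete, here’s roughly the shape of it in code. This is just my own sketch – the class names below (XamlXmlReader, XamlObjectWriter) are my best guess at the new API and may not match what finally ships:

using System.Xaml;

// Read a XAML file as a stream of XamlNodes, and feed each node
// to an object writer that builds up the live object graph.
var reader = new XamlXmlReader("View.xaml");
var writer = new XamlObjectWriter(reader.SchemaContext);

while (reader.Read())
{
    writer.WriteNode(reader);   // copy the node the reader is positioned on
}

object rootObject = writer.Result;  // the fully constructed root object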

As a demo of all the new stuff, they showed a whole client-server app written entirely in XAML: a WPF UI, with Windows Workflow handling the button clicks and calling a WCF service – itself defined in XAML, with another Windows Workflow doing the work. Impressive!

.Net 4.0

After that I went to a session on CLR Futures: basically the foundations of .Net 4.0. The nice feature here, as I already mentioned, is the ability to load .Net 4.0 into the same process as .Net 2.0. Now I can’t actually see myself using this feature directly, but I’m sure it will open up a lot of opportunities in other areas, mainly around application add-in models; no longer will it cause a problem if an application loads an add-in bound to one version of the .Net Framework, and then a second add-in needs a newer version: both can happily co-exist.

There are going to be a number of refinements throughout the CLR:

  • Improvements to the Garbage Collector to reduce the number of times it has to pause the application to do garbage collection
  • Improved Thread Pool to support the new Parallel Extensions framework
  • A new feature that will prevent certain critical exceptions being caught – the ones that indicate a corrupt application that should bomb out as quickly as possible: things like AccessViolation and others.
  • New Profiling APIs that will allow profilers to attach to running applications – a feature focused on server scenarios.
  • Managed Crash dumps can be opened in Visual Studio

Two announcements that I found most interesting: they’re going to add tuples to the BCL – we have the F# and Dynamic languages teams to thank for this, and also for the BigInteger class that they’ve finally got right, with the optimisation expertise of the Microsoft Solver Foundation team.
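
For the curious, here’s the sort of thing those additions mean in code. I’m guessing at the final shape of the API (Tuple.Create and a System.Numerics.BigInteger), so treat this as a sketch rather than documentation:

// A tuple groups a few values without needing a custom class
var point = Tuple.Create(3, 4);
Console.WriteLine(point.Item1 + point.Item2);   // 7

// BigInteger handles arbitrarily large integers
System.Numerics.BigInteger huge = System.Numerics.BigInteger.Pow(2, 1000);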

The second one: a new Contract class, and associated features. This is like Assert on steroids. It allows code contracts to be specified within methods. For example, there are the Contract.Requires methods that set up pre-conditions, and the Contract.Ensures methods that set up post-conditions. All these conditions are declared at the beginning of the method; then, as a post-processing step, somebody (it’s not quite clear whether it’s the compiler or the JIT) rearranges the post-conditions to make sure they’re checked at the end of the method. There are also going to be tools that will check a code-base for situations where these conditions might be broken – I’m guessing that Pex is the place to look for this.
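
Here’s a little illustration of the style of thing they showed – my own sketch, not code from the session, so the exact method names and namespaces may differ in the final bits:

using System.Diagnostics.Contracts;

public static string Truncate(string text, int maxLength)
{
    // pre-conditions: checked on entry
    Contract.Requires(text != null);
    Contract.Requires(maxLength >= 0);

    // post-condition: declared up front, but checked on exit
    Contract.Ensures(Contract.Result<string>().Length <= maxLength);

    return text.Length <= maxLength ? text : text.Substring(0, maxLength);
}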

MGrammar

After a hastily consumed lunch-in-a-box, I headed for Chris Anderson’s talk on the new MGrammar language and tooling for building Domain Specific Languages. This was exactly what I was hoping for, and Chris did a brilliant job of showing it off. The topic deserves a blog post of its own, but suffice it to say for now that the Intellipad text editor provides a great environment for developing and testing the DSL, and even provides syntax highlighting for your new language once you’ve defined it. There are also APIs that you can use in your own applications: give them a grammar and a string, and they will parse it and hand you back a data representation. Exciting stuff.

F#

The last presentation of the conference was Luca Bolognese’s Introduction to F#. Luca did a great job of showing the strong points of F#: the succinctness of the syntax and its integration with other .Net languages. The last point he showed really wowed the audience: he had written a real-world application that pulled stock prices off the Yahoo finance website and did some calculations with them. The punch-line came when he added one “async” keyword and sprinkled a few “!” symbols, and the whole function became parallelized.

The remains of the day

And that was the end of the conference. But the day held one last surprise for me. On the bus back, I found myself sat next to two guys from the Walt Disney World Company. We got talking about the conference (I doubt anybody on the bus was discussing much else), and when we got back to the hotel they mentioned how they were going to meet up with some colleagues at Downtown Disney, and they invited me to go along with them. We had a great dinner down there, munching over M and F# as well as pizza. So Dave and Kevin: thanks a bunch. It was awesome!

PDC is at an end. Tomorrow I fly home. Then the real fun begins: making sense of everything I’ve heard, and putting it to work.

Thursday 30 October 2008

PDC Day 3: Microsoft Research

I’ve not given Google Reader much attention over the last few days – I’ve been a bit busy with other stuff – so tonight when I finished up I thought I better pay down some of my aggregator debt: I had something like 450 posts to look through, and my aged laptop stuttered through them rather slowly. Thus I don’t have a great deal of time to give you today’s news before I fall asleepppppppppppppppppppppppppp – oops - at the keyboard: a guy sat in the armchair next to me in the keynote did just that, and his whole screen filled up with zs!

The Keynote

The keynote this morning was given by Rick Rashid, Head (or Director, or Vice President, or whatever top-flight title they’ve given him) of Microsoft Research. He’s clearly a distinguished guy, even if he did say so himself. He’s been in the same job for 17 years. Before that he worked on various groundbreaking projects, like the NeXT OS, which later became MacOS X, and one of the very first networked games, AltoTrek. Since joining Microsoft Research, Rick has led the team that delivered the first version of DirectX, and other tools that became shipping products.

After those 17 years of growth, Microsoft Research now has more than 800 researchers: that’s equivalent to creating a new computer science faculty every year. They have Turing Award and Fields Medal winners in their ranks, and more members of the National Academy of Engineering than IBM.

Rick mentioned a couple of interesting areas of research. One was Theorem Proving software. For example, they have developed Terminator: software that is able to prove termination for a very large class of programs. In connection with this, one Microsoft Researcher proved Church’s Thesis, which was an open problem for 50 years.

Changing up a gear (for energy efficiency), Rick introduced a colleague who talked about the work they are doing with sensor networks. As a demo, they had rigged up the PDC hall with a network of 90 environmental sensors. Live on stage they showed the temperature readings that the sensors were giving, superimposed on a view of the hall from Virtual Earth. The presenter then fast-forwarded through the data, showing the hall cooling at night, then warming up again as the lights were turned on, and even more so in the regions around the doors as attendees streamed in. This kind of information can be used to optimise the use of air conditioning in a building, for example. Microsoft themselves are using it to make their new data centres more energy efficient. An extension of this is SensorWeb, a web-based sensor-sharing project (all hosted in the Cloud, of course) that allows researchers from all over the world to contribute their own sensor data to a big pot for interesting analysis.

Rick then flicked through some other demos from the Computational Biology arena (Human Genome decoding, HIV research) – there’s even code for this stuff that you can get from CodePlex.

They finished with two cool demos. One was of Boku, a game that teaches children how to program. They’ll release a version of it for the Xbox later next year. Children can create their own games by putting objects and characters in a world, then visually assigning rules to those things to tell them how to behave. For example, you can drop a couple of apples in the world, then configure a little creature to move towards an apple when it sights one. It looked great.

The other demo was of a future version of the Microsoft Surface device called SecondLight. This one uses some clever materials to allow secondary displays in the space above the surface. They showed Virtual Earth in satellite view displayed on the surface; then they held a piece of tracing paper above the device, and the street view was projected onto the tracing paper. Cool stuff. It works by using a voltage to toggle the surface material very quickly between opaque and transparent. While it is opaque, the surface display is projected; when it is transparent, the image for the secondary display is shown.

The Sessions

On that high note the keynote ended, and I arose from my comfy chair for the last time. I attended Daniel Moth’s excellently presented session on the Parallel Task library and the features they are adding to Visual Studio 2010 to support it. They announced that the library (which includes PLINQ) will ship with .Net 4.0. In Visual Studio there will be two new features for debugging Tasks (which are like light-weight threads): the Parallel Tasks window, which is a bit like the Threads window but shows running and scheduled Tasks; and the Parallel Stacks window, which shows a tree view of all Tasks and their relationships, and the stack for each Task. There’s a good MSDN article on these features.

I spent most of the rest of the day in Oslo sessions. I think the picture is becoming a bit clearer now; I’m going to one last session tomorrow from Chris Anderson to learn about the language for building DSLs (In other news, I added his autograph to my Oslo Modelling book today). After that I hope to blog my impressions of it. In the meantime, you’ll have to content yourselves with Martin Fowler’s analysis!

One last piece of excitement. I filled in my session evaluation forms today, being the good boy that I am. After I completed one of them, an announcement came up on screen saying that I’d won a prize. Since it didn’t invite me to Click Here!, but rather to go to the main information desk, I took it seriously, though I didn’t hope for more than a tee-shirt. I was actually handed a copy of Windows Vista Ultimate. Since I already have a spare copy of Vista, I’m inclined to find an innovative way of giving it away. Watch this space!

Wednesday 29 October 2008

PDC Day 2: Windows 7, VS2010, Office 14 and Oslo

I’m dedicated. While all the other PDCers are still out partying at Universal Studios, I came back to my hotel room in order to bring you the news from the PDC. Or to put it another way, I wasn’t really taken with the chainsaw-wielding zombies lunging at the legs of passing guests, or the ghouls that lurked behind pillars and leaped out to induce a scream. I picked up a free meal at Pizza Hut, including the biggest Funnel Cake I’ve ever seen, and then snuck my way back to the bus, making sure to keep well away from the scaries.

But let’s go back to the start. For me, Day 2 of PDC tops Day 1 by some margin. Yesterday’s keynote by Ray Ozzie on Azure was heavy on marketing but light on the interesting stuff. Today’s all-morning-long keynote was packed with geekness.

Windows 7

The first new thing to be demoed was Windows 7 (that’s not a codename, by the way: they’re actually calling it that). Although they’re positioning it as “Windows Vista done right”, there’s actually some cool new stuff here – features that should be really useful. New window-management features, for a start. How much time do you waste positioning your windows so that you can see them side by side? Now you can drag them towards the top or sides of the screen and they will dock – rather like in Visual Studio. “Jump Lists” are another. These are little application-specific tasks that an app can display hanging off its icon in the start menu, even before it’s launched.

All the utilities, like Paint and Wordpad, get an overhaul (“we’ve decided we’ll do it once every fifteen years, whether they need it or not”, said Steven Sinofsky). They are all Ribbonified, and Wordpad gains Open XML and ODF support. For developers, there’s the nice feature of being able to mount Virtual Hard Disks (VHDs) from within Windows, and even boot from them. And then there’s finally proper multi-monitor support for Remote Desktopping.

And Microsoft would not like me to forget multi-touch. If you have a touch enabled screen you’ll be able to use multiple fingers to manipulate things. They demoed all the cool zooming and scrolling and gestures stuff that we’ve envied on the iPhone.

Lastly, but not leastly, one that the UK government will surely appreciate: BitLocker encryption for USB memory sticks. I need say no more.

.Net and Visual Studio

Scott Guthrie came on to the stage next, to much cheering and clapping – well-deserved, too. He brought news of a new set of controls for WPF being released to the web today. Amongst those going live are the DataGrid, a new DatePicker, a Calendar control, and the Visual State Manager that has been ported from Silverlight. All these can be found on CodePlex. They’ve also released a CTP of the Office 2007-style Ribbon control and RibbonWindow that they’ve been promising for a little while; this is apparently to be found on the Office UI site, but I couldn’t see it!

On the .Net 4.0 front, the Gu-ru announced that CLR 2.0 and 4.0 will run side by side in the same process. This is good news for add-in developers, and may also hold promise for those who want to develop shell extensions in .Net. There will also be improvements for WPF, such as DeepZoom integration (bringing parity with Silverlight) and improved text rendering. Other than that, they’re being rather vague. I went to a whole presentation on WPF Futures later in the afternoon, only to discover that they didn’t really have any firm plans they wanted to talk about beyond the controls they’ve already announced.

The most exciting news in this area is that Visual Studio 2010 is going to be rewritten to use WPF and managed code. They’ll be making use of the Managed Extensibility Framework (MEF) to allow anybody to create extensions to it. As an example, Mr Guthrie created a new ScottGu mode for code comments. He used managed code to write an extension to the text editor that displayed the xml comments above a method in a rich WPF view – including Bug numbers formatted as hyperlinks that could be clicked to see full details.

Office 14

The big news for Office is that there will be new web-based versions of Word, Excel and OneNote, written in Silverlight. These will allow collaborative editing of documents, with automatic synchronisation of changes when multiple users are editing a document. They demoed a spreadsheet being edited in the browser, complete with charting and formula support. The UI looks much the same as on the desktop, because they’ve created a Ribbon control for Silverlight.

Oslo

Oslo, for me, was the wildcard of the PDC. It sounded exciting, but would it be useful? I was expecting that this would feature in a keynote, especially when I saw that Chris and Don were scheduled for a slot in Hall A. But they spent that time doing a live coding exercise in Azure which was interesting, but not what I hoped for. Instead we had to wait till the afternoon to discover what it is all about. And I’m still not quite sure!

What I saw this afternoon was a new language called “M”. This M allows data to be defined and stored in a database (a repository, as they call it), and allows queries to be written on top of this data in a strongly typed fashion. Don Box used the analogy that M is to T-SQL what C is to assembler. The idea, it seems, is to make it very easy to write applications that are configured, and even driven by data. One example might be a business rules application, where the rules are written in M and pushed into the repository. The application can then query the repository, and determine how it is supposed to behave.

Another component of Oslo is a second language called MGrammar. MGrammar is a language for defining Domain Specific Languages: in fact, the M language itself is defined using MGrammar. MGrammar allows the syntax of a language to be defined (in a way similar to ANTLR, if you’ve ever used that), along with projections that map from the DSL to M, so that your DSL can then be interpreted by querying the repository.

There’s a nice text editing tool for this (IntelliPad), and a graphical tool (Quadrant) as well, though I’ve not seen that yet. Everything, including the compiler, is in managed code, and it is all highly extensible.

I will freely admit that, at the moment I only have the edge pieces of the jigsaw, and a few loose floaters in the middle. I’ll let you know when I’ve slotted everything into place. Fortunately they were handing out free copies of a new book, The “OSLO” Modelling Language. I got Don Box to sign my copy, so that must surely help me understand it!

Tuesday 28 October 2008

The dynamic Future of C# 4.0

The session I’d been most looking forward to was Anders Hejlsberg’s presentation on the Future of C#. It was clear that many others were too, because the session room – the second biggest in the convention centre, holding around 2,500 people – was packed. Anders stood, regarding us all with a fatherly smile as we filed in. A few dared to approach him. One came forward to get an autograph, and was clearly so overwhelmed that, when the great man asked him for a pen, he forgot that he had one attached to his conference pass and went back to fetch one from his seat. Others, more brazen, even asked to be photographed with him.

Then the session began. Anders kicked off by outlining the history of C#. You wouldn’t believe it, but it will be ten years old in December – counting from conception, rather than from birth. C# 1.0 was about managed code; 2.0 was about generics; 3.0 introduced functional programming and LINQ. And now 4.0 introduces dynamic programming.

Anders introduced us to the dynamic keyword, which is used when declaring a variable, function return type, or parameter. As Anders put it, it is used to declare that the static type of the thing is dynamic. When a variable is marked as dynamic, C# won’t try to resolve calls made on that object at compile time: instead it delays all method resolution until run time, but uses exactly the same algorithm as it would have used at compile time, so that overload resolution works correctly.
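
In code, that looks something like this (my own sketch of the syntax he showed; GetCalculator is just a hypothetical factory method):

dynamic calculator = GetCalculator();   // static type of 'calculator' is dynamic

// Neither of these calls is resolved at compile time; the same overload
// resolution rules are applied at run time against the actual object.
int sum = calculator.Add(10, 20);
calculator.Display("Total: " + sum);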

The basis of the dynamic resolution is the IDynamicObject interface, shared with the Dynamic Language Runtime. This has methods like Invoke on it, which the compiler will call to allow the object itself to participate in method resolution. As well as allowing easy interaction with other dynamic languages such as IronPython and IronRuby, this also has the benefit of making COM Interop much more natural. Rather than having to resort to Reflection when encountering “Peter-out typing” (Anders’ term for the phenomenon where COM object models become more loosely typed the further out along a property path you go), using dynamic typing will allow natural-looking code all the way.

Another big help when talking to COM is support for optional and named parameters. Anders made his mea culpa on this one. You can now write code in C# talking to COM the way you could ten years ago in VBA, he said; no longer do we need to call upon Type.Missing to flesh out the method calls to COM objects.
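
Something along these lines, I expect – my own sketch of the announced syntax rather than anything shown on stage:

// Declaration: parameters with default values are optional at the call site
public void OpenDocument(string path, bool readOnly = false, int zoom = 100)
{
    // ...
}

// Call sites can omit trailing arguments, or name just the ones they care about
OpenDocument("notes.txt");
OpenDocument("notes.txt", zoom: 150);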

The final thing he announced was support for co-variance and contra-variance of generic interfaces and delegates. No big surprise there: Eric Lippert has been trailing this for a while (hypothetically, of course!). The syntax they’ve decided upon is that type parameters that are only going to be used to return values can be marked with out in the type declaration, whereas parameters that are only used for receiving values are marked with in. For example, IEnumerable<out T>, or IComparable<in T>. I think this is sufficiently complicated to warrant another blog post, so that I can understand it better myself.
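
As a taster of what that makes possible, here’s a sketch (assuming the BCL interfaces do get the in/out annotations described):

using System.Collections.Generic;

// Covariance: IEnumerable<out T> means an IEnumerable<string>
// can be used where an IEnumerable<object> is expected
IEnumerable<string> names = new List<string> { "Anders", "Eric" };
IEnumerable<object> things = names;

// Contravariance: IComparer<in T> works the other way around –
// a comparer of objects can stand in for a comparer of strings
IComparer<object> anyComparer = Comparer<object>.Default;
IComparer<string> stringComparer = anyComparer;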

All in all, no huge surprises, but very useful nonetheless. Anders’ conclusion was that C# 4.0 allows you to do things in your code that you were always surprised you couldn’t do before.

But that wasn’t the end. The maestro had one last treat in store for us: a sneak preview of something coming after C# 4.0. Anders explained that, for historical reasons not hard to determine, the C# compiler – being the very first C# compiler ever written – is not actually written in C#. This is a big millstone round their collective necks, he admitted. So they have already started work on rewriting the compiler in C#, and when it’s done, they will release it for the world to use in its own applications.

It will include the ability to get at a language object model version of code, enabling easy refactoring or meta-programming. It could be included in applications to enable runtime evaluation of expressions. To much applause, Anders concluded his session with a demonstration of how easy it is to create a REPL (Read-Evaluate-Print Loop) using the new C# compiler component.

Cloudy Azure Skies at the PDC

Microsoft are clearly getting in touch with their Arty side. The most recent evidence came today when they announced Windows Azure; earlier hints can be found in “Silverlight” and Project Indigo (which later became WCF).

Windows Azure is Microsoft’s new Cloud Operating System. Not an Operating System in the sense of something that you install  on your PC, but an Operating System in that it provides a layer of abstraction upon which interesting things can be built. Just as Windows XP or Windows Vista abstract away the messiness of interacting with keyboard, mouse, graphics card, disk drives, etc. to provide a nice API with which developers can do interesting things, so Windows Azure abstracts away the difficulties of purchasing hardware for data-centres, provisioning machines with identical copies of code, configuring load-balancers, backing up databases etc. and provides a nice programming model and user interface to streamline all those things.

We were shown a demo of a standard ASP.Net application being packaged up and tagged with a config file. This package was then submitted to the Azure portal, where it was given a url and then published. It was instantly available at that address. The magic came when they pretended that load on the application had increased, and more computing power was needed: just changing a number in the config file was sufficient to scale the application out across another couple of machines – Windows Azure taking care of the details.

Another component is the cloud storage. They have an API that allows Blobs, Queues and simple Tables to be managed in the cloud, with space being provided elastically. All the data is available through REST APIs, with a .Net object model on top for easier access through managed code.

And of course we developers need our tools, so there is a Windows Azure SDK which you can download today. This provides templates for Visual Studio, but more importantly, it provides a mini-version of the Azure platform for your desktop, so that you can build and test applications without needing to upload them.

In the CTP that they released today only managed code is supported, but the intention is to allow native code, and other languages like PHP in the near future. Also in the future will be a “Raw” mode that will allow Virtual Machines to be hosted in the cloud a la Amazon’s EC2.

The intention is to release this commercially in 2009, though Microsoft apparently will be guided by the community as to when they think it is ready.

Day 1 at the PDC

I’m back in my hotel room after a long day at the convention centre. My bed behind me is radiating sleepy snugliness, but I’m  ignoring it for a short while longer because I know that you’re all hungry for news.

My day in public began at 6:35 AM (I’m guessing you’re not interested in the minutiae of all that went before). The penalty for staying so conveniently close to the airport (the runway is about 500 yards from my window) is a much longer journey to the convention centre. Microsoft have kindly provided free shuttle buses to and from all the official conference hotels, running throughout the morning. Obviously most people wanted to make sure of a good seat (and breakfast before that) at the first keynote, so the first bus, due at 6:45 but actually 10 minutes late, was heavily oversubscribed. It had 17 spaces, and there were around 60 of us.

Fortunately more buses arrived at five or ten minute intervals. Unfortunately, my training as an Englishman did nothing to prepare me for boarding a bus in such a situation. No queue, just a dash for the door, elbows at the ready. It took several buses before I steeled myself to go for it – making space for the ladies first, of course.

The journey across LA took about 35 minutes. Another opportunity to observe those wonderful skinny palm trees, this time looming out of the misty morning in the glow of a sun still fairly low on the horizon. Once at the convention centre I made a dash for the registration desks, expecting there to be long queues. I was pleasantly surprised at the efficiency, however – a theme repeated throughout the day. I cleared registration in a few minutes, then headed for breakfast.

I’ve never eaten in a dining hall so big. This one had more serving lanes even than the humongously wide LA freeways: I reckon there were about 15 lanes, each double sided, from which hungry guests could choose a whole variety of ways of breaking their fast. At the end of the lanes were huge bins of fruit-juice bottles, nestling in ice-cubes. Then to find a table: there must have been half an acre of them to choose from, but to make it easy, conference staff positioned themselves next to vacant spaces and waved red “Available Seating” signs above their heads. If you wanted to find somebody in particular though, you needed other help. A guy on my table was phoning directions to his friend for about five minutes before they were reunited. I suggested that Microsoft might want to introduce a table level view to the next version of Virtual Earth.

Having tucked away breakfast as fast as was decent, I made for Hall A for the keynote. They might as well have called it Hangar A – it was big enough. Four huge screens flanked the stage, two on either side; I restricted my gaping to a few seconds however, because I needed to find Jeff Sandquist. Jeff, head of the team responsible for Channel 9, had contacted me last week to offer me one of the “best seats in the house”. Apparently he’d been following my blog, and wanted to give me a treat. Thanks again Jeff. The best seats in the house turned out to be 10 Barcalounger reclining armchairs, set up in the middle of the conference hall. Very comfortable they are too. I suspect I’ll be especially grateful tomorrow, when the keynote is scheduled to last all morning. Speakers have a hard time keeping me on the edge of my seat though!

The crowds that head towards the doors after the sessions – well! “Herd” would be a better descriptor. Mostly they head for the food troughs to stock up on snacks – tables set up in the lobbies and hallways, piled high with fruit and snacks; or the watering holes – refrigerators stocked with cans, urns of tea and coffee, even chest freezers well stocked with ice-creams. Truly a place flowing with milk and honey.

[What’s that? You wanted to hear about the sessions? Technical stuff you mean? Everyone else is writing about that. I’ll get there in next post.]

Everyone else seemed to be able to get their connectivity fix in the session rooms. My ancient work-laptop acknowledged the presence of a wireless hotspot, but refused to connect to it (Hint, hint boss – what about an upgrade? Oh, wait – there’s my expenses to pay first!). No matter, though: where there’s space in the hallways between the food tables and the refrigerators there are abundant PCs set up, all invitingly running IE8. Nice to be able to catch up with my wife over Google messenger.

There was no official dinner laid on at the end of the day. Instead there was the Partner Expo Reception. All the sponsors had chipped in to lay on a multi-cultural slap-up buffet, with serving points strategically located amidst the sponsor booths. I wandered around with a BBQ steak on my flimsy plate, clutching my plastic fork, wondering how I was supposed to eat this not-exactly finger food. In the end I found a perch, and managed to saw off enough to determine that it hadn’t really been worth it. Not to worry. The chicken wings, egg rolls, sticky rice, pilau rice, shredded beef and smoked chicken wraps were all good, and the pastries, Hershey’s chocolates and jelly beans more than compensated for any inconvenience caused.

But by now my body clock was reminding me that it still is not quite at home in the new time zone, and that you, my dear readers would be expecting news of my doings. So I called it a day and headed for the shuttle bus, this time un-crowded, and was deposited safely back at the hotel, where I’m just about to hit submit so that you can vicariously join in with my adventures.

Monday 27 October 2008

An unusual Pre-con day at the PDC

Today is Pre-conference day at the Microsoft PDC. Industry luminaries and experts like Charles Petzold, Mary Poppendieck, Jeff Prosise and Juval Lowy are giving attendees the benefit of their wisdom and experience on subjects ranging from WPF and WCF, through advanced Windows debugging and .Net performance optimisation, to Agile software development. But I didn’t go. Instead, I went to church.

Getting to church wasn’t just a matter of popping round the corner. I suspect that folk in LA rarely “pop round the corner” for anything. Coming from the UK, everything in Los Angeles seems so spread out. I suppose that unlike the UK, they have no “green belt” to worry about: need more space? Just colonise another block of the desert. As a consequence of this capacious town planning, street maps of LA can easily mislead eyes conditioned to maps of UK cities – as I and my legs have now discovered to our cost.

The first part of the journey was easy. After stoking up for the day on an archetypal American Buffet Breakfast (waffles with Maple syrup and sausage, egg and bacon on the same plate at one point!), I sauntered out of the lobby to pick up the shuttle bus to the airport. I first saw these shuttles when I came out of Arrivals at the airport yesterday. Swarms of them circle the different Terminal pick up points, day and night. Every major hotel within hearing distance of the airport, and all the car rental companies, not to mention the long-stay parking providers, have their own fleet of buses to convey customers cost-free to their place of business.

Then it was on to the FlyAway bus service headed for Van Nuys (or Van Eyeeeeeees, as our imposing female driver called out at each stop). The journey out along the San Diego Freeway gave me an excellent sample of suburban and (even more sub)urban LA, all of which I could observe in air-conditioned comfort from my front-seat vantage point. From what I could see, the City of Angels is mostly flat, except for the lumpy bits where they dump the canyons.

The flat parts of the city are divided into streets of which the Romans would have been proud and a geometrist prouder. Many of the streets are lined with palm trees, tall, leggy things, determined not to be overshadowed by the office buildings that surround them. To me, it looked like some of them had even resorted to surgery – being sophisticated L.A. palm trees - because instead of terminating in the mass of fronds that usually marks the top of a palm, these ones had another burst of trunk, and then a second bunch of greenery. Others have gone in for body ornaments, rigging themselves up as mobile phone transmitting towers.

Having alighted at the Van Nuys bus station, I consulted my map and confirmed that I needed to head for Roscoe Boulevard to catch the final bus. The map provided by the bus company showed the bus station virtually butting up to the Boulevard (as you can see for yourself), but rather worryingly, the friendly security guard who I asked for directions had to consider for a few moments before pointing out my way. He estimated “about 5 to 10” in answer to my question of how many minutes it would take to walk there. I suspect that in a former life he was a software developer, because by the end of the walk the actual figure lay just beyond the upper end of that range.

As I stood at the final bus stop, I offered a silent prayer that the driver of the bus would be a helpful one. I knew where I needed to get to, but I didn’t have a clue which stop I needed, because the Google map that I’d printed out only showed a fragment of the neighbourhood of the church; my legs baulked at the thought of getting off a block too early, and my watch pointed out that I didn’t have time for mistakes. My prayer was answered. Not only did the driver offer to call out my stop; he also refused the five-dollar bill that I proffered for the $1.50 exact-change fare, and took just a dollar bill instead. And it turned out that the bus stop was right outside the door of the church.

I’ve never been to a church as big as this. Grace Community Church was founded fifty years ago, and their first chapel, still on the site, was about as big as a good-sized English church building. The new “worship center” can, at a guess, hold between three and five thousand, with a stage up front for a small orchestra and a good-sized choir.

Phillip Johnson was preaching today. I’ve been reading his blog for some years now, which was how I found out about the church, and I’ve always found him to be a very interesting and edifying writer. Today he spoke, very thought-provokingly, on the third of the Ten Commandments: “You shall not take the name of the Lord your God in vain, for the Lord will not hold him guiltless who takes his name in vain” – a prohibition on using the name of God lightly, and without due reverence. Phil made the observation that, though atheists deny there is a God, they see no contradiction in invoking his name at times of shock or frustration or anger. He reminded us that in the world of commerce, businesses protect their names very forcefully through trademark law, as their brands and reputations depend on them. So why should God care about his name any less?

In all the other sins prohibited by the commandments, there is some profit or pleasure for the sinner, however momentary or fleeting. But in breaking this commandment there is no gain whatsoever. Even though it is now a habit for many people to punctuate their conversation with God’s name, it is still an act of rebellion and defiance of this third commandment. That is why every one of us who has used God’s name lightly is guilty. But Phil concluded by reminding us of the way to be freed from all guilt: the salvation and full pardon that we can have by believing in Jesus Christ.

Plenty of food for thought whilst waiting for the bus, and then on the walk back to the Van Nuys bus station. The other thing on my mind was the sun beating down on my head. I had, at the prompting of my wife, looked up the weather for LA before I came. I’d noted that the temperature would be in the mid to high twenties, and she had thoughtfully packed short-sleeved shirts. But I failed to carry the thought through to figure out that it wasn’t going to be patio heaters providing the warmth, and to take appropriate precautions – basics, like a bottle of water and a hat. By midday the sun was as hot as any mid-summer’s day back home, and the best I could do by way of shading myself was to stand in the shadow of a lamp post – not terribly effective when you consider my girth.

But I made it safely to the bus station without dehydrating, and lived to regale you with the tale. It’s been a thought-provoking day of rest. And now, just one more night to go before all is revealed!

Sunday 26 October 2008

Pre-conf in the sky

Well, here I am in LA, probably about to blog a load of nonsense, because I've now been awake for 24 hours. At five this morning, a taxi arrived to take me to the station, to catch the six o'clock train to Heathrow, where I boarded the 11:30 flight to LA, which turned into a 12:30 flight because of technical problems with the boarding tunnels. We were safely delivered across the pond by 11:15 PM, UK time, just in time to give my wife the good news before she went to bed. I made myself stay up, because going to bed when my newly adjusted watch said 3:15 just seemed ridiculous.

I feel that the conference has already begun. Seated next to me on the plane was Geff, of NxtGenUG fame. He introduced me to Guy Smith-Ferrier, whose book on .NET Internationalization I happened to have ordered just last week. Guy very kindly gave me a personalised presentation on the possibilities for localisation in WPF. It was one of the most interesting presentations I have attended, not least because the speaker was crouched in the aisle of the plane next to me, constantly putting himself on hold as fellow passengers passed by!

Later on, a few of us, including Mike Taulty from Microsoft had our very own pre-conference Open Space. I forget what was on the agenda, but it probably included localisation again, because Guy was there, and I get the impression that he has a one track mind!

Tomorrow I intend to make a bus trip up to Sun Valley to worship at Grace Community Church. Then, let the conference begin.

Now the room is beginning to sway. I think the sleep debt collector has come knocking...

Thursday 16 October 2008

Project Euler Code Clearance

Roll up! Roll up! Get them while they're hot, don't leave them 'till they're not. Get your genuine, authentic Functional Fun Project Euler solutions here. Bargain prices, can't beat 'em. Visual Studio solution thrown in free.

I've been pretty busy over the last week or so, and it looks like I'll continue to be busy until I depart for LA at the end of next week. And you know what that means: .Net 4, C# 4, Oslo, Cloud Computing, Windows 7. That gang of acronyms and code names should keep me in blog posts for a good long while. Meanwhile, I have several Project Euler solutions languishing in the basement of my hard-disk (fourth platter, third sector, last block on the right, I think), and I can't see myself getting the time to do them justice in individual blog posts.

So today I'm going to do them an injustice and launch them into the world with barely a scant mention each. As usual, you can get the code from MSDN Code Gallery.

Problem 20: Summing the digits of a Factorial

Problem 20 gave me a chance to reuse my code for summing large numbers again. That's because calculating a factorial can be done iteratively. Imagine the sequence 1!, 2!, 3!, etc. You get the (n + 1)th term by multiplying the nth term by n + 1 (that's just the definition of factorial). And that multiplication boils down to adding up n + 1 copies of the nth term. Wrap that simple idea inside an application of Unfold, and you have the solution.
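
If you just want the gist without my Unfold and big-number helpers, here's a rough equivalent that leans on a BigInteger type instead – a sketch of the idea, not the code in the solution download:

using System.Linq;
using System.Numerics;

// Build n! iteratively, then sum its decimal digits
static int FactorialDigitSum(int n)
{
    BigInteger factorial = Enumerable.Range(1, n)
        .Aggregate(BigInteger.One, (product, i) => product * i);

    return factorial.ToString().Sum(digit => digit - '0');
}

// FactorialDigitSum(100) gives the answer to Problem 20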

Problem 22: Number crunching names

Problem 22 is just a file processing problem. Nothing more to it than that. It has a nice, one line solution with LINQ though:

return File.ReadAllText(problemDataFileName)
           .Split(new[] { ',' })
           .Select(name => name.Trim(new[] { '\"' }))
           .OrderBy(name => name)
           .Select((name, index) => (index + 1) * name.ToCharArray()
                                                      .Select(letter => (letter - 'A') + 1)
                                                      .Sum())
           .Sum();

Problem 25: Big Fibonacci numbers

In Problem 25, Euler wants to know the first term in the Fibonacci sequence to have 1000 digits, which is of course his way of getting us to find a way of computing with big numbers. No problem for us though: we've got the Unfold method to generate the Fibonacci sequence - and did I mention the code I wrote before that we can use to sum the big numbers we'll find?
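
Again, the downloadable solution uses my own helpers, but the shape of the search looks roughly like this if you let a BigInteger do the heavy lifting (a sketch, with made-up method names):

using System.Numerics;

// Walk the Fibonacci sequence until a term reaches the required number of digits,
// and report which term it is (counting terms the way Project Euler does)
static int FirstFibonacciTermWithDigits(int digitCount)
{
    BigInteger previous = 1, current = 1;
    int term = 2;

    while (current.ToString().Length < digitCount)
    {
        BigInteger next = previous + current;
        previous = current;
        current = next;
        term++;
    }

    return term;
}

// FirstFibonacciTermWithDigits(1000) answers Problem 25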

Problem 31: Counting Coin Combinations

I'm quite pleased with my solution to Problem 31, wherein Euler asks us to count the number of ways in which £2 can be given using the British coinage system. I came up with a recursive algorithm that starts with the value of change you wish to give and a list of available coin types. It removes the biggest coin from the list, works out how many times that coin could be used, and for each possibility calculates the number of combinations for the reduced amount and the reduced coin set by calling itself recursively.

For example, if we needed to make £1 using 10p and 2p, and 1p coins, we'd start by seeing that we could use between zero and ten 10p coins, so there are eleven possibilities we need to investigate. For each possibility we calculate how much remains once we've put down the appropriate number of 10 pences, then use the same algorithm again, but considering just 2p and 1p coins.
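
In code, the recursion looks something like this – a minimal sketch of the approach described, not the exact code from the solution download:

// Count the ways of making 'amount' using the coins from coins[index] onwards.
// The coins array is assumed to be ordered largest first.
static int CountCombinations(int amount, int[] coins, int index)
{
    if (amount == 0) return 1;            // made exact change: one valid combination
    if (index == coins.Length) return 0;  // no coin types left, but money remains

    int combinations = 0;

    // try using the current coin zero times, once, twice, ... as long as it fits
    for (int used = 0; used * coins[index] <= amount; used++)
    {
        combinations += CountCombinations(
            amount - used * coins[index], coins, index + 1);
    }

    return combinations;
}

// Problem 31: £2 expressed in pence, with the British coin set
// CountCombinations(200, new[] { 200, 100, 50, 20, 10, 5, 2, 1 }, 0)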

Problem 36: Binary and Decimal Palindromes

We last encountered numeric palindromes in Problem 4: they're numbers that read the same in both directions. In Problem 4 we were only interested in numbers that are palindromic when written in base 10. Problem 36 asks us to count the numbers that are palindromic when written in binary and in decimal. The most interesting part about this problem was finding a number's representation in binary. I could probably have used the BitArray class to do this, but instead chose to use this LINQ query:

private static IEnumerable<bool> GetNumberBits(uint number)
{
    if (number == 0)
    {
        return new[] {false};
    }

    // iterate through the bit positions, checking whether that
    // bit of the number is set;
    // do it in reverse, so that we can ignore leading zeros
    return 31.To(0)
        .Select(bit => (number & (1 << bit)) == (1 << bit))
        .SkipWhile(bit => !bit);
}

Problem 112: Finding the density of Bouncy numbers

Mathematicians don't try to hide their obsession with numbers, do they? They make it plain as day that they count numbers amongst their closest friends, by giving them all names. It's the Bouncy numbers that are the heroes of Problem 112: numbers whose digits, read as a sequence, are neither entirely increasing nor entirely decreasing. My solution to this problem puts the Aggregate and LazilyAggregate methods to a rather unconventional use.
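
For reference, a straightforward (if less interesting) way to test a single number for bounciness looks like this – a sketch of the definition rather than my Aggregate-based solution:

using System.Linq;

// A number is bouncy if its digits are neither entirely non-decreasing
// nor entirely non-increasing
static bool IsBouncy(int number)
{
    int[] digits = number.ToString().Select(c => c - '0').ToArray();

    bool increasing = true, decreasing = true;
    for (int i = 1; i < digits.Length; i++)
    {
        if (digits[i] < digits[i - 1]) increasing = false;
        if (digits[i] > digits[i - 1]) decreasing = false;
    }

    return !increasing && !decreasing;
}

// e.g. IsBouncy(155349) is true; IsBouncy(134468) and IsBouncy(66420) are false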

Tuesday 7 October 2008

Doing Planning the Agile way

So we're going to use Agile to manage the development of our secret new product. What does that actually entail? I'm not qualified to say for the general case, but I can show and tell how we've been doing it on our project. Honesty reminds me to state that we didn't invent any of these ideas: most of them came from Mike Cohn's excellent books User Stories Applied and Agile Estimating and Planning.

Telling Stories

My first job, once we decided to go Agile was to fill up the Product Backlog. This is our wish-list of everything we would like to go into the product at some point, though not necessarily in release one. It contains a whole bunch of User Stories, which are concise descriptions of pieces of functionality that a user would like to be in the software.

There's no IEEE standard for User Stories, and that's a good thing, because they're meant to be written either by the end users of the software, or at least as if the users of the software had written them. How many IEEE standards do you know that can be implemented by your clients?

But don't panic: just because there's no standard, doesn't mean there's no help. We followed along with Mike Cohn's suggestion of writing User Stories in the form "As [some kind of user], I want [something] so that [I get this benefit]". In some cases we went on to record some high-level details about how the feature might work, but nothing more than a few sentences. User Stories are supposed to be placeholders for conversations that we'll have with our users (or pretend users) nearer the time when we implement the feature. I found the acronym INVEST helpful: User Stories should be Independent, Negotiable, Valuable, Estimable, Small, Testable.

In our Backlog we've got stories like "As a User, I want to be able to reset my password myself if I forget it, so that I don't get shouted at by the Administrator" and "As a Sales Manager, I want to be able to issue new license keys to customers so that I can make more sales and get a bigger bonus".

We'd already started along the well-trodden road of writing functional specifications before we were lured in a different direction by Agile, so creating our Product Backlog was mostly reverse engineering: working out what reason a user would have for needing each piece of functionality that we'd specified. This was a useful exercise in itself, as I could make sure that all the features had a reason for being other than "Wow! That would be cool to program."

Another time, I'd probably build up the Backlog starting with a Project Charter, or a high level overview of what we want to achieve, and then using a mind-mapping technique to break this down into stories.

Playing at Estimating

So now we know what our software's going to look like. Can we get it done by next week, as the boss wants? The first step in answering that is deciding how big the project is, and given our Product Backlog, the best way of measuring that is by sizing the individual stories. In fact, we don't even need to calculate an absolute size, just a measurement that ranks stories against each other.

For this reason we chose to measure size in Story Points. This isn't a unit that ISO can help you with; each team will have its own definition of a Story Point, which should emerge naturally through the course of the project. We could have chosen to measure in Ideal Days (days assumed to be without distractions and interruptions), but we again heeded our virtual mentor's advice that this would slow us down, as we'd start to think too much in terms of individual tasks, rather than how a particular story compares in size with others in the list.

The one problem with using abstract units like Story Points is deciding how big one is. We solved that problem by scanning though the list and picking a story that looked pretty small, and a story that looked fairly large and assigning them a 1 and an 8 respectively. We then measured other stories up against these two.

The other thing we agreed on was a sequence of "buckets" for holding stories of different sizes. For small stories, it's relatively easy to agree whether they differ by one or two points of complexity; but as features get bigger, it becomes more difficult to estimate precise differences between them. So we created buckets with sizes of 1/2, 1, 2, 3, 5, 8, 13, 20, 40, 60, 80 and 100 Story Points (you might recognise that as a kind of rounded Fibonacci sequence); the agreement was that if a story was felt to be too big to fit in one bucket, then it would have to go in the next one up. A story that was felt to be bigger than an 8, for example, would have to be assigned to the 13 bucket.

With that sorted, we were ready to play Planning Poker. I made my own cards (in Word 2007), each showing the size of one of the buckets, one deck for each developer. If you want to play along, but don't like my cards, you can buy professional decks from a couple of sources. The "game" works like this.

We'd pick a Story and discuss it. We had all the people in the room we needed to make sure that questions about the scope of the story got answered. We then had a moment of quiet contemplation to come to our own individual conclusions about the size of the story, and to pick the appropriate card from our hands. Then, on the count of three, everybody placed their card on the table. If everybody had estimated the same, great! we recorded it in our planning tool (VersionOne). If not we talked some more. What made Simon give the story a 5, while Roman gave it 1? Then we had another round of cards - or sometimes just negotiated our way to an agreement.

It felt a little strange at first, but soon became quite natural. It's amazing how liberating it is to work by comparing stories, rather than by hacking them into tasks. We had a backlog of about 130 stories, and it took us just under three sessions of 4 hours each to get through the list - not bad for a first go, I thought.

The final thing we did was to triangulate: to go through all the stories that we'd put in a particular bucket and to make sure that they truly belonged there. Was this story packed so full of work that it was flowing over the top of the bucket? Move it up a bucket. What about that one, huddled down in the corner? That would surely fit in the next bucket down?

Self-adjusting estimates

It was tempting, back when we had a Backlog but no sizes assigned to the Stories, to jump straight to the stage of estimating a duration for the project. But that would have bypassed one of the big benefits of using Story Points: self-adjusting estimates.

Imagine you were going on a journey, and you didn't have Google Maps to help you plan. One way of estimating your journey time might be to look at all the cities (or motorway junctions, or other way-points) that you have to pass and guess at the time needed to travel between each. Now suppose you've set off on your journey and travelling the first few stages has taken longer than expected. The children are in the back of the car chorusing "are we there yet?". "No", you say. "How long?" they ask. And what do you tell them? You'd have to work out the journey times for the remaining way-points and apply some kind of scaling factor in order to give them an answer. But you don't do it that way. Do you?

Instead, before you set off, you calculate the total distance. Then, as you're driving you guess at your average velocity. At any time you can divide the remaining distance by the average velocity to give a fairly good estimate of when you'll arrive, that, because you've used historical data, automatically factors in things like, how overloaded the car is and how bad the traffic has been. If your kids have their lawyers on the phone ready to sue if you don't arrive exactly when stated you can even use your minimum and maximum velocity to give them a range of times between which you might arrive.

And so it is with Story Points. They say nothing about duration: they are simply a measure of size - like using distance as the first step in estimating journey times. Velocity is a measure of how many Story points you complete in an iteration. Estimating the duration of the project is then as simple as dividing remaining Story Points by velocity, and multiplying up by the iteration length.
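
As a worked example (with made-up numbers, not our real ones):

// 300 Story Points remaining, velocity of 20 points per three-week iteration
int remainingPoints = 300;
int pointsPerIteration = 20;
int weeksPerIteration = 3;

// round up: a partly-finished iteration still takes the whole iteration
int iterationsLeft = (remainingPoints + pointsPerIteration - 1) / pointsPerIteration; // 15
int weeksLeft = iterationsLeft * weeksPerIteration;                                   // 45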

But we haven't completed an iteration yet, so how do we know what velocity to use? If we had done a similar project using agile we might be able to apply historical values. This might be the case when we're working on version 2 of our product. But for now, we need to go with a forecast of our velocity.

We started by estimating how many productive hours we would have from all developers during an iteration (of three weeks in our case). Industry veterans reckon on up to 6.5 productive hours in an eight-hour working day, though we're still debating this in our company.

Then we picked a small sample of stories of each size from our backlog, trying to include a mix of those that focused on the UI as well as those that mainly concerned the server. Breaking these stories into tasks gave us an indication of how long each story would actually take. We made sure to include every kind of task that would be needed to say that the story was really and truly done, including design, documentation, coding, and testing of all flavours. Finally we imagined picking a combination of stories of different sizes such that we could finish all of them in the time we had for an iteration. Adding up the Story Points of the stories in that combination gave us a point estimate of velocity.

We would have been foolish if we'd gone forward using just that estimate. So we took advice from Steve McConnell's Cone of Uncertainty (which shows how far out an estimate is likely to be at each stage in a project) and applied a multiplier to our point estimate of velocity to get a minimum and maximum. Since this was getting too much for our little brains to handle, we fired up Excel, and made a pretty spreadsheet of the minimum, expected and maximum number of iterations that we predict the project will take.

Unexpected Benefit

The final estimate was far larger than we'd expected (who is surprised?). So we needed to pare back the scope. If we had been basing estimates on a functional specification we could have cut whole features, but it would have been quite difficult to cut parts of a dialog box. But since we were using Stories, we were able to remove them from the release plan and then, simply by subtracting their Story Point estimates from the total, see the impact on the overall project.

At the end of all that, I'm happy. My feeling is that it is the most robust estimate and plan of any I've produced, and for once, we'll have a trivial way of reliably updating the plan as we see how we're getting on.

Thursday 2 October 2008

What Daddy does at work - the family's view

My wife overheard somebody asking our three-year old daughter about what Daddy does at work.

"He eats his lunch, and fixes bugs", was her reply.

Mind you, my wife's view of my work isn't much more exalted. For a long time she was under the impression that I worked with a product called Seagull Server!