My biggest gripe with WPF is how twisted XAML is. It takes a lot of time getting used to, and it’s by no means obvious how to do things.

So this is the story of how to change the appearance of a control from a shared resource.

Styles and Setters

Most of the things you’d consider styling (as known from CSS) can be done via Styles that contain setters.

Inside a ResourceDictionary you can define styles that apply to all controls of a type, or have them be applicable by a key. This is pretty similar to CSS, where you can apply settings either via class (by key in WPF) or by element type (TargetType in WPF).
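A minimal sketch of such a ResourceDictionary, with illustrative property values:

```xml
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <!-- Applies to every TextBox, like an element-type selector in CSS -->
  <Style TargetType="TextBox">
    <Setter Property="Margin" Value="4" />
  </Style>
  <!-- Applied explicitly, like a CSS class: Style="{StaticResource FlatTextBox}" -->
  <Style x:Key="FlatTextBox" TargetType="TextBox">
    <Setter Property="BorderBrush" Value="Gray" />
  </Style>
</ResourceDictionary>
```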

As you can see, everything you do inside a <Setter> is pretty much what you would apply to the element itself in markup. If you have non-trivial values to set, you can use property element syntax, as you can with almost all things in XAML.

This is actually an interesting concept that took some getting used to. In XAML you can specify any attribute of an element using the <Element.AttributeName> syntax in case it’s not expressible as a simple literal value.

Now let’s assume I want to change the appearance of a TextBox to have a flat border. If you come from WinForms like me, you’ll spend a fair amount of time hunting for a BorderStyle property that simply doesn’t exist in WPF.

The way to go here is: You guessed it, Control Templates.

The ControlTemplate allows you to replace the visual tree of your object and roll your own that suits your needs (which makes it powerful, but rather hard to figure out). My naive approach was to simply define the border in XAML like this:
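A sketch of that naive approach (the brush values are illustrative):

```xml
<ControlTemplate TargetType="TextBox">
  <!-- The whole visual tree is now just this Border: nothing hosts the text -->
  <Border BorderBrush="Gray" BorderThickness="1"
          Background="{TemplateBinding Background}" />
</ControlTemplate>
```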

What happened was that I turned my TextBox into a border. No text entering, nothing. I changed the visual tree of the object to only contain a border, and that’s what I got.

It then took me a little while to figure out that you can’t simply change the tree without putting back in an element to write the text to. Apparently for textboxes you have to put in a ScrollViewer named PART_ContentHost.
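A sketch of the fixed template, with the PART_ContentHost ScrollViewer put back in:

```xml
<ControlTemplate TargetType="TextBox">
  <Border x:Name="border" BorderBrush="Gray" BorderThickness="1"
          Background="{TemplateBinding Background}">
    <!-- WPF looks for this exact name to host the editable text -->
    <ScrollViewer x:Name="PART_ContentHost" />
  </Border>
</ControlTemplate>
```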

This then gives us a nice flat border with 1 pixel thickness and a nice shade of gray instead of the gradient brush WPF uses by default.

Triggers

Another very important concept (at least to me) was triggers. Assuming we made our control look pretty, we may want to change its appearance once the user mouses over it or focuses it.

If you, like me, try using the FocusVisualStyle to do that, you’ll soon notice that focus visual styles are not applied when a user clicks the element, only when they navigate to it by keyboard. It was meant to provide UI hints for keyboard navigation, not to indicate focus in general.

The correct way to do stuff like OnFocus and OnMouseOver is by using Triggers. With triggers you can do something whenever a property matches a certain value. So in our case we can create a trigger that changes the border color whenever the IsFocused property contains the value True.
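As a sketch, assuming the Border in the template is named border (the brush value is illustrative):

```xml
<ControlTemplate.Triggers>
  <Trigger Property="IsFocused" Value="True">
    <Setter TargetName="border" Property="BorderBrush" Value="DodgerBlue" />
  </Trigger>
</ControlTemplate.Triggers>
```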

A MultiTrigger can combine several conditions: one that applies a black BorderBrush only if IsFocused is false and IsMouseOver is true makes sure the focus style is not overridden and doesn’t flicker whenever the user waves the mouse around.
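A sketch of such a MultiTrigger (again assuming a Border named border in the template):

```xml
<MultiTrigger>
  <MultiTrigger.Conditions>
    <Condition Property="IsFocused" Value="False" />
    <Condition Property="IsMouseOver" Value="True" />
  </MultiTrigger.Conditions>
  <Setter TargetName="border" Property="BorderBrush" Value="Black" />
</MultiTrigger>
```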

DataTriggers

Triggers are only good for one thing: firing when properties in the visual tree change. If you want to do something when something in your DataContext changes, you need DataTriggers. These work by defining a Binding and a Value, but in essence they behave pretty much like normal Triggers, except that they work on anything you can bind to.

Like regular triggers, DataTriggers come in both DataTrigger and MultiDataTrigger flavors.

DataTriggers are also the answer to most MVVM questions of the form “How can I show X from my ViewModel?” The answer is usually: define some boolean property (with INotifyPropertyChanged) and hook a DataTrigger up to it. This works for starting animations and just about everything else. It’s crazy powerful!
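A sketch of that pattern, where IsBusy is a hypothetical boolean property on the ViewModel:

```xml
<Style TargetType="ProgressBar">
  <Setter Property="Visibility" Value="Collapsed" />
  <Style.Triggers>
    <!-- IsBusy is a made-up INotifyPropertyChanged property on the DataContext -->
    <DataTrigger Binding="{Binding IsBusy}" Value="True">
      <Setter Property="Visibility" Value="Visible" />
    </DataTrigger>
  </Style.Triggers>
</Style>
```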

And that’s about all the stuff I spent most of my morning figuring out. I hope you get some useful information out of it, because I quickly became frustrated with the content that’s out there.

After some trouble with our previous hoster, the old domain www.dotlesscss.com went offline.
Chris tried to resolve the issue, but things didn’t work out, so the website has now been offline for almost two months.

We were not completely offline since then: I migrated the website to GitHub Pages some time ago, and all documentation was moved over to the new GitHub Wiki. But we were still missing a shiny domain name we could point people to.

So I finally decided to give up on waiting for our old domain to come back and registered a new one:

What’s the default target platform in the .NET environment? Yep, it’s AnyCPU, meaning a flag is set in your assembly that says “this IL code can be compiled to either x64 or x86”, and the executing .NET Framework then decides which platform to use based on the OS it’s running on.

If you are running on an x64 Windows 7 machine you’ll execute a native x64 application; if you run on a 32-bit Windows you’ll run a 32-bit application.

When in doubt you’ll want to default to AnyCPU and only switch to x86 or x64 explicitly if you are doing some native P/Invoke or other weird stuff. So for 90% of developers AnyCPU is the perfect target platform and they don’t even know what goes on behind the scenes. They just compile once and run on any Windows with a .NET framework.
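For reference, the target platform is just an MSBuild property in the project file; a hand-edited fragment (build-configuration conditions omitted) looks roughly like this:

```xml
<!-- In the .csproj: AnyCPU, x86, or x64 -->
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
```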

Now they screwed it up!

Whenever you create a new Project in VS2008 it will default to AnyCPU, everyone is happy.

Fast forward, VS2010 comes along and changes this default behavior:
In Visual Studio all class libraries default to AnyCPU, but all WPF Applications and Console Applications default to x86

Do you see the implications here? You start out with a clean AnyCPU solution like this one:

Once you add a Console Application you end up with:

As you can see, for no good reason (it’s only two blank projects after all) Visual Studio changes the platform to Mixed and sets the ConsoleApplication to x86!

Why is this so bad?

Well, if you are running on an x64 machine and try to run unit tests from your library project (as most of us do), you end up running an x64 process that is linked to an x86 assembly!

You don’t need to be a genius to figure out that this is not going to work. You’ll see a System.BadImageFormatException pop up with no good explanation. (Apparently the Platform dropdown in the Toolbar was also removed in VS2010!)

To provide some context, the motivation for the change is to alleviate the pain that customers are feeling when developing on a 64-bit OS – some examples being Edit and Continue (EnC) and P/Invoke scenarios. To elaborate on the EnC example, EnC is supported on a 64-bit OS provided that you’re debugging a 32-bit process. However, by defaulting to AnyCPU, processes are automatically run as 64-bit, which means that EnC will not work unless you change the platform target to be x86

Have you ever used Edit and Continue? Hell, I mostly run without a debugger attached, and they are doing stupid things like this because they want to allow me to change code while I debug a program. Are you kidding me?

Not to mention the hilarious post quoted above that tries to argue that AnyCPU is usually not worth it, listing reasons 99% of programmers don’t care about at all.

Just to round this up: if you compile something to x86, you can still run it on x64 Windows. Windows features an x86 “emulation layer” called Windows on Windows (it lives in your C:\Windows\SysWOW64 folder) that allows you to run x86 applications on a 64-bit operating system. It works, but why would I want an abstraction layer when I could just as easily run a native 64-bit application on my 64-bit OS?
We are talking about changing runtime characteristics for a tiny convenience during development, and that’s not good in my opinion.

I have no idea why, but despite having been a Windows user for most of my career, I know the Unix command line pretty well. In fact, one of the best things in PowerShell was the ls alias for the Get-ChildItem cmdlet.

Naturally, Microsoft could not include an alias for every Unix command out there, so I spend a fair amount of time hunting down the PowerShell equivalents of Unix commands whenever I need one.

This time it’s the time command, which lets you measure how long the execution of a particular command took. The PowerShell equivalent is called Measure-Command and does exactly the same thing, returning a System.TimeSpan.

For example, to measure the execution time of a git checkout:

Measure-Command { git checkout gh-pages }

Switched to branch ‘gh-pages’

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 344
Ticks             : 3448544
TotalDays         : 3,99137037037037E-06
TotalHours        : 9,57928888888889E-05
TotalMinutes      : 0,00574757333333333
TotalSeconds      : 0,3448544
TotalMilliseconds : 344,8544

I considered creating a time alias for Measure-Command, but the usages are so rare that it’s not really necessary.

Believe it or not, my first encounter with distributed version control was through Mercurial. At the time I was used to SVN, and Mercurial (hg) opened my eyes to the painless ways of the DVCS. And while I was very content with how Mercurial worked, I still had to make sacrifices to fit the way the tool worked.

After seeing some .NET OSS projects on GitHub, and following GitHub and BitBucket for some time, it became clear that GitHub is much more active than BitBucket. So I decided to try out Git and see for myself why everyone is making a fuss about GitHub, and behold: I liked it!

While hg pretty much feels like Subversion, only more awesome, Git manages to lose that feeling. Especially the flexibility that Git allows through its index made it immediately feel “right”. Here is why:

Forgettable me

Sadly, I forget things all the time. And this is especially true while working. Sometimes I commit something and forget to save my Visual Studio project, so my commit only contains loose files. In hg, as in SVN, you are screwed and have to commit a second time to make things right. It’s embarrassing when others see your history littered with “Doh, forgot to save XY” commits.

Git solved this with its git commit --amend, which allows me to add stuff to the previous commit with no hassle. Hg treats all commits as immutable, and that’s just not how I tend to work. I’m stupid, and my tools have to reflect that!
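In practice the fix is two commands (the file name here is illustrative):

```shell
# Sketch: fold a forgotten file into the previous, unpushed commit.
git add MyProject.csproj        # stage the file you forgot
git commit --amend --no-edit    # rewrite the last commit to include it
```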

Caveat: You should NEVER mess around with published history. If you amend a commit that is already in a public repository, everyone that pulled that commit will have a very hard time merging with any upstream changes you start to introduce. Just don’t do it, once you push you are committed!

Staging

Same story, different setting. Besides forgetting to commit something, I sometimes even forget to commit at all. You get into the flow of things, and once you look up you’ve finished three things that had better go into individual commits. With Git you can simply add individual files (or even individual lines inside the files) to the index, commit, and repeat until no uncommitted files are left.

This feature is awesome, and it allows a great deal of flexibility without having to wonder “damn, am I doing too much for one commit?”. It’s like above: I don’t have to think about how the tool wants me to behave; I can easily make my tool behave like I need it to.

Mercurial can also do this, but you end up with hg commit --include fileA fileB ... and that’s just not as simple as the Git index.
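A sketch of that workflow with Git (the file names are illustrative):

```shell
# Stage and commit each finished change separately.
git add src/FeatureA.cs
git commit -m "Add feature A"
git add src/FeatureB.cs
git commit -m "Add feature B"
# Or stage individual hunks of a file interactively:
git add -p src/Shared.cs
git commit -m "Tweak shared helper"
```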

And that’s about it. I went with hg first because people told me it’s better on Windows, but when I moved to Git afterwards I didn’t notice any problems. Speed is pretty much the same, so you can just pick whichever you like regardless of platform.

I simply chose Git because it does not interfere with the way I work, and because it makes it easy to fix my screwups afterwards.

The title says it all: I’ve finally moved my blog off Wordpress and replaced it with static HTML files generated by Jekyll.

This means: No more security updates to Wordpress, no database roundtrip to display what amounts to static HTML content and best of all: I can version my posts using Git!
Naturally, the code is now up on GitHub in its own repository.
Another added benefit is that I can now write my posts using Textile instead of HTML, which makes for a much better editing experience.

What changed for my readers? Nothing, and that’s not really a good thing. I feel like this blog would benefit from a cosmetic overhaul, but until I can convince a designer to help me out with that it will look and feel the same it has always done (boring). I did however clean out the sidebar a bit :)

The migration process

Oh, this is where things get interesting. Migrating to Jekyll was not as easy as I thought it would be. Since my hoster does not allow direct connections to the MySQL database, using the migrator scripts that ship with Jekyll was not an option: they require direct database access, and I was not keen on installing MySQL on my machine just for that purpose.

It was a journey from there. I first tried to export MSSQL-compatible SQL, but suffice it to say phpMyAdmin sucks (it was also unable to generate sane YAML). I finally found the export function inside Wordpress itself, which generates an RSS-style XML document containing all posts, and decided to simply write a Ruby script that exports the posts from that XML into HTML files for use with Jekyll.

Writing that script was also a bit funny, since Ruby/Jekyll totally fails at reading YAML Front Matter in any encoding besides ANSI. So I had to find out how to write a file in ANSI using Ruby (it defaults to UTF-8) and then how to HTML-encode text using htmlentities. Anyway, overall I have to say writing that Ruby script was really fun, and I may end up doing a bit more Ruby in the future.
And by the way: I finally managed to learn how to get around in Vim. Did I mention that this editor totally rocks?

In the end, the script is still rather simplistic. It just takes a wordpress.xml file and extracts all posts into the _posts directory, adding categories, the title, etc. to the YAML Front Matter of each post. You can find it in my GitHub repository: Importer.rb
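A minimal sketch of what such an importer might do (the element paths, file naming, and front-matter fields here are assumptions, not the actual Importer.rb):

```ruby
require "rexml/document"
require "fileutils"
require "time"

# Extract each <item> from a Wordpress RSS export into a Jekyll _posts file.
def import_posts(xml_path, out_dir = "_posts")
  FileUtils.mkdir_p(out_dir)
  doc = REXML::Document.new(File.read(xml_path))
  doc.elements.each("rss/channel/item") do |item|
    title = item.elements["title"].text
    date  = Time.parse(item.elements["pubDate"].text)
    body  = item.elements["description"].text || ""
    # Build a Jekyll-style file name: YYYY-MM-DD-slug.html
    slug  = title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
    name  = "#{date.strftime('%Y-%m-%d')}-#{slug}.html"
    front = "---\nlayout: post\ntitle: \"#{title}\"\n---\n"
    File.write(File.join(out_dir, name), front + body)
  end
end
```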

I’ll put up a series of blog posts about the Wordpress → Jekyll migration in the near future.

Update: Sorry to all my feed subscribers for the spam. I changed the feed format to Atom, and your reader may now show 20 unread posts when there is really only one.