Remove any existing copy of the WordPress archive, download the latest version, and test that it’s a valid gzip. I assume that if the gzip is valid, the TAR file inside will also be okay. readlink is used because when I tried using wget --directory-prefix=~/var/wp-upgrade directly, wget created a directory called ~ inside the directory it was run from.
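
A sketch of this step (the archive name, download URL, and upgrade directory are my assumptions, not necessarily the original script’s):

rm -f ~/var/wp-upgrade/latest.tar.gz
# readlink resolves the path to an absolute one before wget sees it
wget --directory-prefix="$(readlink -f ~/var/wp-upgrade)" https://wordpress.org/latest.tar.gz
# --test checks the archive is a valid gzip without unpacking it
gzip --test ~/var/wp-upgrade/latest.tar.gz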

cp ~/usr/wp-maintenance/.maintenance ~/blog-home/
echo "* Site is now down for maintenance *"

.maintenance is a PHP file that, with appropriate code, causes a down-for-maintenance page to be shown when non-admin users visit the site. See the blog post series WordPress Maintenance Mode Without a Plugin for details. ~/usr/wp-maintenance/.maintenance contains the .maintenance code from the third part of the series. I also use a custom down-for-maintenance page, as described in the second part of the series.

Nothing too exciting – the client settings apply to both MySQL commands, and the add-drop-table setting ensures that DROP TABLE statements are included in the MySQL backup.
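
For illustration, ~/usr/mysql-blog.cnf might look something like this (the credentials are placeholders; only add-drop-table comes from the description above):

[client]
user=blog
password=secret

[mysqldump]
add-drop-table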

echo "* Disabling plugins. Unless you roll back, you will need to enable the plugins manually at the end of this process *"
mysql --defaults-extra-file=~/usr/mysql-blog.cnf \
  -e "UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';" \
  blog_database

This runs the risk that WordPress could change the way the enabled state of plugins is stored in the database, but it’s convenient.

Remove the wp-includes and wp-admin directories entirely, and then untar the new version of WordPress over the old version. The one piece that was tricky here was that the files in the TAR are all under the directory wordpress/, meaning that my first attempt resulted in the creation of ~/blog-home/wordpress. A bit of research found me the --strip parameter (short for --strip-components), which as used strips wordpress/ from the paths in the TAR.
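
Sketched out, with the same assumed paths as before:

# clear out the old code, then untar the new version over the top
rm -rf ~/blog-home/wp-includes ~/blog-home/wp-admin
# --strip-components=1 removes the leading wordpress/ from every path
tar -xzf ~/var/wp-upgrade/latest.tar.gz -C ~/blog-home --strip-components=1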

When playing around with ASP.NET membership, I found myself in a situation where I wanted to mock the ASP.NET providers. This is something the design of providers makes non-trivial. Mark Seemann summarises: “Since a Provider creates instances of interfaces based on XML configuration and Activator.CreateInstance, there’s no way to inject a dynamic mock.” See Provider is not a pattern.

I had a look around to see what others were doing. I found a post, Mocking membership provider, which proposes adding mocked providers to the provider collection dynamically. It seems like an elegant solution, but I couldn’t get it to work for me after a little playing.

In the end, I came up with a solution that is not the most elegant, but is very easy to use and to understand.

I create an implementation of each provider I want to mock. The provider contains a mock of that provider type. Each method and property of my provider implementation forwards to the contained mock. The mock is accessible via a static method of the provider implementation, so that test code can interact with it.

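A sketch of the shape of it, using Moq (the class name, and which members forward rather than throw, are my own choices):

using System;
using System.Web.Security;
using Moq;

public class MockedRoleProvider : RoleProvider
{
    // Static mock: test code configures expectations and verifies calls here.
    private static Mock<RoleProvider> mock = new Mock<RoleProvider>();

    public static Mock<RoleProvider> GetMock() { return mock; }

    // Reset between tests so expectations don't leak from one test to the next.
    public static void ResetMock() { mock = new Mock<RoleProvider>(); }

    // The members I need simply forward to the mock...
    public override bool IsUserInRole(string username, string roleName)
    { return mock.Object.IsUserInRole(username, roleName); }

    public override string[] GetRolesForUser(string username)
    { return mock.Object.GetRolesForUser(username); }

    // ...the rest stay as Visual Studio generated them.
    public override string ApplicationName
    {
        get { throw new NotImplementedException(); }
        set { throw new NotImplementedException(); }
    }

    public override void AddUsersToRoles(string[] usernames, string[] roleNames)
    { throw new NotImplementedException(); }

    public override void CreateRole(string roleName)
    { throw new NotImplementedException(); }

    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole)
    { throw new NotImplementedException(); }

    public override string[] FindUsersInRole(string roleName, string usernameToMatch)
    { throw new NotImplementedException(); }

    public override string[] GetAllRoles()
    { throw new NotImplementedException(); }

    public override string[] GetUsersInRole(string roleName)
    { throw new NotImplementedException(); }

    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames)
    { throw new NotImplementedException(); }

    public override bool RoleExists(string roleName)
    { throw new NotImplementedException(); }
}
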
Note the static methods controlling the mock at the top. Note also that I’ve simply implemented all methods and properties of RoleProvider as not implemented using Visual Studio tooling, and then updated the implementations to forward calls to my mock as I need.

Wiring up the provider framework to use this implementation is easy. Just add the following config to the app.config of your unit test project:
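
Something along these lines (the provider type and assembly names are placeholders for wherever your mocked provider lives):

<system.web>
  <roleManager enabled="true" defaultProvider="MockedRoleProvider">
    <providers>
      <clear />
      <add name="MockedRoleProvider"
           type="MyTests.MockedRoleProvider, MyTests" />
    </providers>
  </roleManager>
</system.web>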

I followed with some interest the debate around the “mass assignment vulnerability” recently reported in Rails. I dislike the way the whole debate is couched in terms that assume access control at the model level. When you state that an attribute cannot be mass-assigned, you are stating that, to assign that attribute from the controller, you can’t use the normal update_attributes method and must update it explicitly.

I don’t like the laziness this implies. I believe we should embrace practices that encourage explicit thought about the attributes a controller method can update – encouraging developers to build in an intentional fashion, and to consider exactly how each method they build behaves. The role-based security in Rails 3.1 is alright, but I still think the definition of which attributes a controller method updates should be made explicit. It isn’t just a security concern: the attributes a controller method can take as input to update a model should, in my opinion, be part of the contract of that method. It aids understanding.

I far prefer the ideas of the View Model and Model In Model Out. The input a controller can receive, and the output that is rendered to the view, are both explicitly modelled (and therefore documented), and independent of the underlying model. Using a tool such as AutoMapper can reduce the noise that mapping View Model to Model introduces.
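
To illustrate in C# (all of the type and property names here are invented): the view model defines exactly what the action can accept, so anything absent from it simply cannot be set by a request.

using AutoMapper;

// Domain model: includes state a request must never set directly.
public class UserProfile
{
    public string DisplayName { get; set; }
    public string Email { get; set; }
    public bool IsAdmin { get; set; }
}

// View model: the explicit contract of the edit action.
public class EditProfileViewModel
{
    public string DisplayName { get; set; }
    public string Email { get; set; }
}

public static class MappingConfig
{
    public static void Register()
    {
        // AutoMapper's classic static configuration API.
        Mapper.CreateMap<EditProfileViewModel, UserProfile>();
    }
}

In the action, Mapper.Map(viewModel, profile) copies DisplayName and Email onto the entity; IsAdmin can never be over-posted, because the view model doesn’t carry it.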

I just went to post a comment on a WordPress.com hosted blog. I was going to use my Twitter account to log in, and in order to authenticate with Twitter I was asked to allow WordPress to:

Read Tweets from your timeline.

See who you follow, and follow new people.

Update your profile.

Post Tweets for you.

This appears to be because the same authentication is used when you are a blog author and wish to allow WordPress to “Tweet your WordPress.com posts”.

Come on WordPress. Surely you’ve got the resources and savvy to provide different levels of authentication for bloggers and for commenters? For a commenter, identity is the only issue, and the authentication process should ask for no rights whatsoever beyond being able to read my email address and name. Cf. the Principle of least privilege. Lame.

I want my EF POCOs to implement IUserNameStamped if they have a UserName property, and ILookup if they have Code and Description properties. I want the IUserNameStamped code in a file IUserNameStamped.cs, and the ILookup code in a file ILookup.cs.

By default, a T4 template will generate a single file with the same name as the template, and the extension defined by the <#@ output #> directive. The EntityFrameworkTemplateFileManager, used by EF to generate a file per entity, is the secret to generating multiple files from a single template.
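
In outline, usage looks like this (a sketch; the include matches the standard EF templates, while the namespace and file content are my example):

<#@ output extension=".cs" #>
<#@ include file="EF.Utility.CS.ttinclude" #>
<#
var fileManager = EntityFrameworkTemplateFileManager.Create(this);

// Output from here until the next StartNewFile (or Process) call
// goes to the named file instead of the template's default output.
fileManager.StartNewFile("IUserNameStamped.cs");
#>
namespace Model
{
    public interface IUserNameStamped
    {
        string UserName { get; set; }
    }
}
<#
// Writes all the started files to disk.
fileManager.Process();
#>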

The other change needed to the T4 code we already have is to break it into reusable methods that can be shared for each entity.

The method I’ve defined to generate a file for a given interface is CreateInterfaceFile, shown here with support classes.
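
In essence it can be sketched as a T4 class feature block (the signature, the namespace, and passing the file manager in as a parameter are all my assumptions):

<#+
// Writes <interfaceName>.cs: the interface declaration, plus a partial
// class adding the interface to each entity passed in.
void CreateInterfaceFile(EntityFrameworkTemplateFileManager fileManager,
                         string interfaceName, string interfaceBody,
                         IEnumerable<EntityType> entities)
{
    fileManager.StartNewFile(interfaceName + ".cs");
#>
namespace Model
{
    public interface <#= interfaceName #>
    {
        <#= interfaceBody #>
    }

<#+
    foreach (EntityType entity in entities)
    {
#>
    public partial class <#= entity.Name #> : <#= interfaceName #> { }
<#+
    }
#>
}
<#+
}
#>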

Duck typing is an interesting concept, and alien to C# generally. But using the techniques of my previous post about T4 and Entity Framework, it is possible to have your entities implement interfaces if they have the required properties, resulting in behaviour similar to duck typing. Please read the previous blog post before reading this one.

The previous blog post gives us code to implement interfaces for each entity in an object model. In order to provide “duck typing”, we will extend this to only implement the interface for an entity if that entity has the properties of the interface.

Fortunately, System.Data.Metadata.Edm.EntityType gives us the ability to inspect the properties of an entity. For my purposes, I only check for properties by name, as I control my database and would never have the same column name with two different data types. Extending this code to check property types as well as names is left as an exercise for the reader.

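A hedged reconstruction of the two helpers (exact signatures assumed; in the template these would be class feature methods rather than a standalone class):

using System.Collections.Generic;
using System.Data.Metadata.Edm;
using System.Linq;

static class DuckTypingHelpers
{
    // True only if every required name is present on the entity, either
    // as a scalar property or as a navigation property.
    public static bool EntityHasPropertyOrRelationship(
        EntityType entity, params string[] names)
    {
        return names.All(name =>
            entity.Properties.Any(p => p.Name == name) ||
            entity.NavigationProperties.Any(np => np.Name == name));
    }

    // All entities in the item collection that have the required names.
    public static IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
        ItemCollection itemCollection, params string[] names)
    {
        return itemCollection.GetItems<EntityType>()
            .Where(entity => EntityHasPropertyOrRelationship(entity, names));
    }
}
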
Pretty simple stuff. EntityHasPropertyOrRelationship checks both the Properties (properties relating to simple database columns) and the NavigationProperties (properties relating to foreign key relationships) for properties with the required names. If our entity has all the required properties, it’s a match.

GetEntitiesWithPropertyOrRelationship uses EntityHasPropertyOrRelationship to retrieve, from our itemCollection, all the entities that have the required properties.