HTML6 Should Have Composeable Elements (30 Aug 2012)

Even in a well-configured web-app that caches all static content appropriately, the vast majority of sites go back to the server to retrieve the HTML for a page every time. This seems like a flaw to me: we are continually serving elements of the page that don't change often and could be cached, but currently you can't cache them individually - it's an all-or-nothing affair. So if there is anything dynamic on the page, back to the server we go. I wish we could do this:
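
Something along these lines - a hypothetical bit of markup (the data-src attribute here matches the jQuery fallback further down), where the browser itself would fetch each fragment and merge it into the document:

<html>
  <body>
    <header data-src="/header"></header>
    <article data-src="/articles/1"></article>
    <footer data-src="/footer"></footer>
  </body>
</html>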

It seems nuts that we treat all the elements on the page as a single unit. The contents of my header, article and footer may have different cacheability requirements: my header may contain my name, which means only I should be able to cache it; an article could be cached for a few hours; my footer changes infrequently and could be cached for even longer. The requests in the above example could have Cache-Control headers like this:

/header
Cache-Control: private, max-age=600

/articles/1
Cache-Control: public, max-age=7200

/footer
Cache-Control: public, max-age=86400

Now that the HTML page mainly contains placeholder elements, it could be cached too, as it doesn't contain any user-specific data. This is essentially donut caching - we're caching all the bits around the bit that changes (in our example, the <header>).

To speed things up some more, if you route all your traffic through a CDN like Akamai, you increase the chance of a first-time visitor being served content from a cache (one of their edge servers, potentially located near them), meaning less traffic on your servers and a better experience for users. Even without a CDN, a reverse proxy like nginx can help reduce the load on your web-servers.

Can’t we just use IFrames?

No, this wouldn't be the same thing. An IFrame is a completely isolated document hosted within another document, with its own DOM. What I want here is for all the composed elements to form a single document, with one DOM. If this were standardised, search engines could also treat all the content as one, i.e. a crawler would fetch all the dependent content and treat the merged result as a single page.

In the meantime, we can approximate this ourselves with a bit of jQuery:

$(document).ready(function(){
    // Find all elements that have a data-src attribute
    var elements = $("[data-src]");
    // For each one, perform a GET request and replace the element with the HTML in the response
    elements.each(function(){
        var element = $(this);
        $.get(element.data("src"), function(html){
            element.replaceWith(html);
        });
    });
});

This approach works, but has some downsides:

A user without JavaScript will just see a blank page.

A search engine will just see (and index) a blank page.

To cater for both non-JS users and search engines, you would have to detect them and return a single HTML document in the traditional method.

The $(document).ready() event would lose its meaning. If you controlled every bit of JS on the page you could work around this by firing an event when all the dependent elements have been received (a sketch follows below), but any external scripts or plugins that used $(document).ready would break.
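
For example, building on the jQuery snippet above (the fragmentsReady event name is invented):

$(document).ready(function(){
    var elements = $("[data-src]");
    var pending = elements.length;
    elements.each(function(){
        var element = $(this);
        $.get(element.data("src"), function(html){
            element.replaceWith(html);
            // When the last fragment has been merged in, fire our own event
            if (--pending === 0) {
                $(document).trigger("fragmentsReady");
            }
        });
    });
});

// Code that would normally go in $(document).ready subscribes to this instead
$(document).on("fragmentsReady", function(){
    // Safe to assume the fully-composed document is available here
});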

I really do think this would be better handled by browsers in a standard way, and in a way that maintains backward compatibility. It's in the spirit of the web, embraces HTTP cacheability, and would speed up the web by making more of the bytes that fly around cacheable.

Implementing XML-RPC Services with ASP.NET MVC (25 Jan 2012)

I recently needed to expose some functionality using XML-RPC services, and looking around the web, the most popular .NET library is www.xml-rpc.net, which looks pretty good. However, it implements services using a custom HTTP handler, and I had really hoped to find one based on ASP.NET MVC. Why would I want to do this? I'd like to be able to use some of the infrastructure I already have in place with MVC, e.g. caching, action filters, dependency injection via the controller factory etc. Well, I couldn't find anyone doing anything like this, so I hacked together something of my own.

So just a quick recap: an example of an XML-RPC request looks like this:
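
This one is hypothetical, but it matches the MetaWeblog method we'll implement later:

<?xml version="1.0"?>
<methodCall>
  <methodName>metaWeblog.getRecentPosts</methodName>
  <params>
    <param><value><string>blog-id</string></value></param>
    <param><value><string>username</string></value></param>
    <param><value><string>password</string></value></param>
    <param><value><int>10</int></value></param>
  </params>
</methodCall>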

First up is routing. Normally the controller/action values are extracted from the URL, but we want to extract them from the XML in the POST body, specifically from the /methodCall/methodName node, using the convention that the name before the period is the controller and the name after is the action. We can do this with a custom route that inherits from System.Web.Routing.Route and overrides the GetRouteData method to set the RouteData for the target controller and action:
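
Something like this (error handling omitted; a real version should guard against missing nodes):

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using System.Xml;

public class XmlRpcRoute : Route
{
    public XmlRpcRoute(string url) : base(url, new MvcRouteHandler()) { }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        var routeData = base.GetRouteData(httpContext);
        if (routeData == null) return null;

        // Read the method name out of the XML-RPC request body
        var doc = new XmlDocument();
        doc.Load(httpContext.Request.InputStream);
        httpContext.Request.InputStream.Position = 0; // rewind so it can be read again later

        var methodName = doc.SelectSingleNode("/methodCall/methodName").InnerText;

        // Convention: "controller.action", e.g. "metaWeblog.getRecentPosts"
        var parts = methodName.Split('.');
        routeData.Values["controller"] = parts[0];
        routeData.Values["action"] = parts[1];
        return routeData;
    }
}

Register it in Global.asax ahead of the default route, e.g. routes.Add(new XmlRpcRoute("api/xmlrpc")); (the URL is up to you).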

At this point, XML-RPC requests will be routed to the correct controller and action, but the parameters now need to be bound to the target method. I was originally planning to use a custom ModelBinder, but I didn’t want to add a global model binder, nor did I want to override the default model binder for each parameter in the target method. So a simpler approach was just to use a custom ActionFilterAttribute. This attribute inherits from System.Web.Mvc.ActionFilterAttribute, which means its OnActionExecuting method is called before the action is executed. This gives us a chance to set the parameters of the method. To do this, we need to deserialise the XML-RPC parameters into their appropriate CLR types. All we need to do is apply this attribute to our controller classes. Here’s the code:
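
This version only handles a handful of scalar XML-RPC types; structs and arrays are left as an exercise:

using System;
using System.Globalization;
using System.Web.Mvc;
using System.Xml;

public class XmlRpcServiceAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        var doc = new XmlDocument();
        doc.Load(request.InputStream);
        request.InputStream.Position = 0;

        var valueNodes = doc.SelectNodes("/methodCall/params/param/value");
        var parameters = filterContext.ActionDescriptor.GetParameters();

        // Bind each XML-RPC <param>, in order, to the action method's parameters
        for (var i = 0; i < parameters.Length && i < valueNodes.Count; i++)
        {
            filterContext.ActionParameters[parameters[i].ParameterName] =
                Deserialise(valueNodes[i], parameters[i].ParameterType);
        }
    }

    private static object Deserialise(XmlNode valueNode, Type targetType)
    {
        var typed = valueNode.FirstChild; // e.g. <string>, <int>, <boolean>...
        switch (typed.Name)
        {
            case "i4":
            case "int": return int.Parse(typed.InnerText);
            case "boolean": return typed.InnerText == "1";
            case "double": return double.Parse(typed.InnerText, CultureInfo.InvariantCulture);
            case "dateTime.iso8601":
                return DateTime.ParseExact(typed.InnerText, "yyyyMMddTHH:mm:ss", CultureInfo.InvariantCulture);
            default: // untyped values default to string
                return Convert.ChangeType(typed.InnerText, targetType);
        }
    }
}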

We're making progress: our action method is now being invoked with its parameters correctly initialised to those specified in the XML-RPC request, and we can implement the action normally. However, we still need to deal with serialising any return value back into the XML-RPC response format. Ideally I would have liked to simply return an object from the action and have it serialised; however, by default ASP.NET MVC wraps return values that are not ActionResult instances in a ContentResult, which just calls their ToString() method. My solution was to create an XmlRpcResponseResult type that inherits from ContentResult and serialises the return value into an XML string:
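
A stripped-down version - Serialise only covers a few scalar types here, and the exception message ought to be XML-escaped:

using System;
using System.Web.Mvc;

public class XmlRpcResponseResult : ContentResult
{
    public XmlRpcResponseResult(object returnValue)
    {
        ContentType = "text/xml";
        Content = "<?xml version=\"1.0\"?><methodResponse><params><param>" +
                  Serialise(returnValue) +
                  "</param></params></methodResponse>";
    }

    // XML-RPC faults are a <fault> element containing a struct with faultCode and faultString
    public XmlRpcResponseResult(Exception exception)
    {
        ContentType = "text/xml";
        Content = "<?xml version=\"1.0\"?><methodResponse><fault><value><struct>" +
                  "<member><name>faultCode</name><value><int>500</int></value></member>" +
                  "<member><name>faultString</name><value><string>" + exception.Message + "</string></value></member>" +
                  "</struct></value></fault></methodResponse>";
    }

    private static string Serialise(object value)
    {
        // A real implementation also needs <struct>, <array>, <dateTime.iso8601> etc.
        if (value is int) return "<value><int>" + value + "</int></value>";
        if (value is bool) return "<value><boolean>" + ((bool)value ? "1" : "0") + "</boolean></value>";
        return "<value><string>" + value + "</string></value>";
    }
}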

Before I go on, you may notice that this class knows how to serialise exceptions into XML-RPC faults. You may also have noticed that the XmlRpcServiceAttribute filter has an OnActionExecuted method, which runs after the action has executed; it checks whether an unhandled exception was thrown and, if so, swaps the result so that a fault response is returned.

So with all this plumbing in place, what does a real example look like? Well, all of this was to implement the MetaWeblog API for my blog, so here's what the method looks like to get recent posts:
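
The repository and the shape of the returned posts below are illustrative, but the metaWeblog.getRecentPosts method routes here thanks to our controller/action convention:

using System;
using System.Web.Mvc;
using System.Web.Security;

[XmlRpcService]
public class MetaWeblogController : Controller
{
    // Hypothetical repository, supplied by the controller factory (e.g. via an IoC container)
    private readonly IPostRepository _posts;
    public MetaWeblogController(IPostRepository posts) { _posts = posts; }

    [ActionName("getRecentPosts")]
    public object GetRecentPosts(string blogid, string username, string password, int numberOfPosts)
    {
        if (!Membership.ValidateUser(username, password))
            throw new InvalidOperationException("Invalid credentials"); // becomes an XML-RPC fault

        // The return value is serialised by XmlRpcResponseResult in the filter's OnActionExecuted
        return _posts.GetRecent(numberOfPosts);
    }
}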

I think this is quite clean, and it looks pretty much like any other controller.

I hope this has been useful to some; do bear in mind that the code above is far from complete. There are some XML-RPC types I didn't get around to dealing with (e.g. <base64>, essentially a byte array) and I'm sure the error handling could be better. If you have any suggestions, do drop me a comment below. Thanks for reading.

The Downsides of ASP.NET Session State (22 Oct 2011)

ASP.NET session state is an undeniably useful tool for dealing with the statelessness of HTTP. But there are drawbacks that many developers may not appreciate.

The first issue we'll look at is one that a lot of developers don't know about: by default, the ASP.NET pipeline will not process requests belonging to the same session concurrently. It serialises them, i.e. it queues them in the order they were received, so that they are processed serially rather than in parallel. This means that if a request is in progress and another request from the same session arrives, the second is queued and only begins executing when the first has finished. Why does ASP.NET do this? For concurrency control, so that multiple requests (i.e. multiple threads) do not read and write session state in an inconsistent way.

So what sort of scenarios could produce concurrent requests from the same session?

A user opening multiple tabs/windows

Multiple concurrent asynchronous AJAX requests

HTTP handlers that stream resources like images - your browser may make concurrent requests for these.

I would guess the most common case is that of a page making multiple AJAX requests that the developer assumes will run concurrently. It's common to use multiple asynchronous requests when the operations behind them are I/O bound (like waiting for data from a remote server), as you can increase performance by having your server do more work while it's waiting. But if you don't take session state into account, your requests may effectively become synchronous (in terms of total execution time). Let's look at a concrete example: a page making 3 asynchronous AJAX requests to the server, with session state enabled (note that session must actually be used, as ASP.NET is smart enough not to serialise requests if you never touch session state, even when it's enabled):
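
My test looked something like this (names invented): a controller action that touches session and simulates 500ms of work, with the page firing three of these requests at once:

using System;
using System.Threading;
using System.Web.Mvc;

public class AjaxTestController : Controller
{
    public ActionResult SlowOperation()
    {
        Session["LastRequest"] = DateTime.Now; // touching session forces the serialisation
        Thread.Sleep(500);                     // simulate 500ms of I/O-bound work
        return Json(new { done = true }, JsonRequestBehavior.AllowGet);
    }
}

// On the page - three "parallel" AJAX calls:
// for (var i = 0; i < 3; i++) { $.get("/AjaxTest/SlowOperation"); }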

You can see the effect of serialised requests in the network profile: each request takes roughly 500ms longer than the previous one, which means we're not getting any benefit from making these AJAX calls asynchronously. Let's look at the profile again with session state disabled for our AjaxTestController (using the [SessionState] attribute).

Much better! You can see how the 3 requests are processed in parallel and take a total of 500ms to complete, rather than the 1500ms we saw in the first example.
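
For reference, disabling session state for a single controller in MVC 3 looks like this:

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.Disabled)]
public class AjaxTestController : Controller
{
    // HttpContext.Session is null in here - any access will throw
}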

But what do you do if you actually need session state in your AJAX requests? If you only need to read session data, you're in luck: you can use [SessionState(SessionStateBehavior.ReadOnly)], which gives you read access without any requests being blocked (although they will be blocked by a concurrent request running in read-write mode). But if you need to write session data and you're expecting these requests to run concurrently, you need to ask yourself whether you need concurrency control or whether it's unimportant. If you do need it, it may not be enough to rely on the default behaviour, as these controls don't hold up in a web-farm environment - read on.

If you run on a web farm, you've probably set your session state mode to StateServer or SqlServer so that requests can go to any server and still see the same session state. However, the serialising of same-session requests only applies to requests hitting the same server. So in our example above, it would be possible for the 3 AJAX requests to be processed concurrently by 3 different servers, even with session state enabled. Is this a good thing? I'd say no, because thanks to load-balancing it becomes undefined whether the concurrency controls are in effect or not, depending on which servers process the requests. If all 3 requests are processed on different servers, then no concurrency controls are in place, and the sequence of reads and writes to your session data can be inconsistent.

These concurrency problems are actually exacerbated by the way session data is read and written from its data store (whether you're using InProc, StateServer or SqlServer mode). Before your request is processed, ASP.NET loads all session data into the session dictionary; it then processes your request and writes back to the data store before the response is sent. So when you write to session, e.g. Session["SomeKey"] = "SomeValue", the data is not persisted immediately, it's only written later. This means the delay between reading and writing your session data is as long as your request takes to execute, and if your request is connecting to and waiting on a remote server, that delay could be significant. This increases the probability of concurrency problems, because it's more likely the session data is stale by the time your request sees it (i.e. another request has written to the session since you read it), or that you are overwriting an earlier modification made by a different request. These are common problems we deal with in our apps, where we use things like optimistic or pessimistic locking to solve them. But concurrency issues with session state are usually overlooked.

Am I suggesting you stop using session state entirely? Not quite, I just want to highlight the potential pitfalls. The severity of the problem depends on how you use session, and on how serious concurrency issues would be for your system. I would suggest avoiding session state as much as possible, which gives you more opportunity to disable it on a per-controller basis. You could of course handle session data manually. Let's take a very common (and valid) use of session state: storing the contents of a user's shopping basket in an e-commerce app. Instead of keeping the basket items in session, if you stored them in your database and retrieved them only when needed, you would reduce the amount of time that your basket data is held in memory, and thus reduce the likelihood of stale data. So looking at some other common uses of session state, what are the alternatives?

Storing the currently logged-in user in session - forms authentication already does this for you with an encrypted cookie. Go back to the DB or a cache if the forms auth cookie doesn't have the data you need.

Storing user preferences in session - use a cookie (as long as the amount of data is small and not sensitive).

Storing data in session as a user progresses through a wizard - this is usually a mistake and can lead to bugs, as you are assuming your user is only completing a single wizard in a single browser tab/window. You could instead store all the data in form fields and post it along to each step in the wizard.

So quite a bit to digest, but the basic moral of the story is to disable session state whenever it's not in use, and when it is in use, to specify read-only or read-write mode where applicable. Also keep an open mind as to whether ASP.NET session state is even the right approach for your system.

Building a DIY Spam Filter (04 Oct 2011)
If you read my first post, you may remember that I'm writing this blog platform as I go. You may have also noticed that I never got around to implementing a spam filter. After all, who would want to spam little old me? Turns out, quite a few folks! Yes, the spam train has arrived, and I've had to manually weed out thousands of spam comments to stop them appearing here. What to do?

Well, the smart money would have been to use a 3rd-party system like Akismet, but as I say, where's the fun in that? I am genuinely interested in the patterns of spam and how we can beat it, so I wanted to see if I could build a simple and effective filter myself. But I want to lay down some ground rules:

No CAPTCHAs, users hate them

No mandatory account registration or Facebook/Twitter integration

Must be unsupervised; I don't want to have to train a machine learning algorithm

Must be contextual to my blog; I'm not trying to write a new Akismet, so it doesn't need to cater for generic content.

So I've built a very simple filter that tries to identify obviously valid or obviously spammy comments by looking at both the content and the user posting it. I think spam filters focus too much on the content and not enough on analysing the behaviour of the user posting the comment, to see if they act as a normal user would. Some quick observations on the spam I get:

Valid commenters spend a reasonable amount of time on your site before commenting, maybe a few minutes. Spammers automate their comments.

Valid commenters might put their blog or Twitter URL in the "website" field without repeating it in their comment; spammers often repeat it.

Spammers usually come from IPs that have been flagged as spammy before.

Spammers usually include multiple spammy words.

Spammers always put their links in anchor tags.

So the spam filter I've implemented basically takes the observations above and rewards obviously valid comments while punishing obviously spammy ones. It's based on a points system, with an additional weighting on each rule to give it more or less importance, and it results in a spam probability percentage used to decide whether the comment is spam: 0% means we're confident it's not spam, 100% means we're confident it is spam. Somewhere in the middle is a threshold.
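
The core of the scoring is tiny. Here's a simplified sketch - the rules shown, and their scores and weights, are watered-down inventions, not my real ones:

using System;
using System.Collections.Generic;
using System.Linq;

public class Comment
{
    public string Body;
    public TimeSpan TimeOnSite; // how long the visitor browsed before posting
}

public class SpamRule
{
    public Func<Comment, bool> Applies;
    public double Score;  // positive pushes towards spam, negative towards valid
    public double Weight; // relative importance of the rule
}

public static class SpamFilter
{
    public static double SpamProbability(Comment comment, IList<SpamRule> rules)
    {
        // Sum the weighted scores of the rules that fire...
        var score = rules.Where(r => r.Applies(comment)).Sum(r => r.Score * r.Weight);
        // ...and normalise against the worst possible score to get a 0-100%
        var maxScore = rules.Sum(r => Math.Abs(r.Score) * r.Weight);
        return Math.Max(0, score / maxScore) * 100;
    }
}

// Example rules:
// new SpamRule { Applies = c => c.TimeOnSite < TimeSpan.FromSeconds(10), Score = 40, Weight = 1.5 }
// new SpamRule { Applies = c => c.Body.Contains("<a href"), Score = 25, Weight = 1.0 }

Any comment scoring above the threshold gets rejected.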

Does it work?

So far, the signs point to...YES! After a bit of tweaking with the scores and weights for each rule, it's stopped the vast majority of spam. What still gets through?

Manually entered spam - yup, some people actually sit down and manually post spam on sites. How fun. This defeats my time-on-site metric, as they sometimes do spend a few minutes on the site. But luckily they're stupid enough to break most of the other rules, so it often does flag them as spam.

Spam with no links - every so often you get spam that has no links in the content (and no spammy words). It could be an attempt to train machine-learning filters to accept similar comments in the future (which will no doubt contain links) and maybe get their IP white-listed.

Spam in other (non-English) languages

How could I improve it?

Lots of ways! Some simple rules I still need to add:

Analysis of user interaction on the page (using JavaScript) - did the user move their mouse, press keys etc.? Try to reward commenters who behave in a normal way, e.g. spend a few minutes reading the post, typing the comment and then submitting it.

Detect non-English comments

But my big idea is to challenge the assumption that a spam check is black or white, i.e. that our system must decide that a comment either is or is not spam. That's fine for the obvious cases, but there are plenty of times where the honest answer is "we're not sure". How about a tiered system based on our scoring, so that when we're not sure, we take additional steps to determine whether the content is spam? Say we decide that a score between 25% and 75% represents the "we're not sure" state. Some options for the additional steps we could take:

Go back to the user and ask them to complete a CAPTCHA. I know, I know, I said I don't want them, but the vast majority of users would never see it, only those pushing their luck with potentially spammy comments.

Follow the links they include and analyse that content to see if it's spam. Black-listing URLs/domains doesn't work, because spammers use profile pages on popular sites like Flickr and Fotolog as gateway pages to their actual spam. But those pages are often filled with the usual spammy keywords and might tell us what we need to know.

Email the user (using the address given in the comment form) asking them to confirm their comment by email.

Analyse known formats of comments. E.g. I often see spam in this form: {question}? <a href="{url-very-similar-to-one-supplied-as-commenter-website}">{spammy words}</a>

If any spammers are reading this and hatching a plan to thwart my efforts, I direct you to this obligatory XKCD comic.

Getting Started with AppHarbor (08 Jun 2011)
Unless you've been sleeping under a rock, you'll have noticed the new trend toward hosting web-apps on PaaS platforms like Heroku, AppEngine and Force.com - aka "The Cloud". These platforms let you deploy your apps to an environment somewhere between shared hosting and a VPS (virtual private server), but charged by usage rather than a regular monthly fee. Their killer feature, however, is that they allow you to scale up nearly instantly. So if your app makes it onto the front page of reddit, instead of your VPS melting and curling into the foetal position, you can "spin up" more instances to handle the load.

Up until now, the only viable option for running ASP.NET apps on a PaaS platform was Microsoft's own Windows Azure. Although I have no doubt it is a powerful platform, the general consensus is that it can be quite complex and quite expensive. I'm sure Azure is great if you're building the next Facebook, but if you just want to deploy a simple web-app cheaply, it can seem like overkill. Luckily, I've discovered AppHarbor.

AppHarbor's official strap-line is "Azure done right", but unofficially it's "Heroku for .NET", as it uses the same deployment method as Heroku, one that revolves around version control and continuous integration. It goes something like this: push code changes to your remote Git repository on AppHarbor; it runs any unit tests in your solution, and if those pass, it deploys your app. Simple. The best part of AppHarbor is the price - free! For now, you can create a single-instance app with a small database for zero dollars. They are a young start-up and are not yet charging for more instances or larger databases (they will soon), but have said they intend to keep a free version available. Let's run through a step-by-step guide to deploying an existing app.

Deploying Your First App

After you've signed up and created a new application on AppHarbor, the next page will give you the details of the Git repository that you'll need to push to for your app to be deployed. Your Git repository URL should be https://{Username}@appharbor.com/{AppName}.git

Now, back in your local dev environment, I'll assume you have your existing app bound to an existing Git repository. If you're new to Git, AppHarbor's support docs have some useful links; see "Deploying your first application".
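
From there, deploying is just a matter of adding AppHarbor as a remote and pushing (substitute the repository URL from earlier):

git remote add appharbor https://{Username}@appharbor.com/{AppName}.git
git commit -am "First deploy to AppHarbor"
git push appharbor master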

Back on AppHarbor, refresh your app's main page - you should see the progress of your build.

Assuming your build succeeds, your app should be available on the default URL: http://{AppName}.apphb.com/. Voilà, your app deployed within a few minutes, for free!

Creating a Database

If your app needs a database (currently only MSSQL and MySQL are supported), go to your app's main page; at the bottom there is a section for databases - click "Add Database".

Choose the type of DB, the requested size and a name, and create it.

The next page will give you the connection string to your DB. You can use your DB client tools (e.g. MSSQL Management Studio) to connect to it and configure it as needed.

Tips & Tricks

Use a Custom Domain

Go to https://appharbor.com/application/{AppName}/hostname/new and enter the hostname you'd like to use for your app. Take note of the IP address they give you.

You can enter multiple hostnames and set one of them as canonical, which means all requests to the other hostnames will be redirected to the canonical one. A common scenario is to set up two hostnames, *.appname.com and www.appname.com (canonical), so that any request to appname.com that doesn't begin with www gets redirected to the www URL.

Next you need to change your DNS settings: add an A record (or records) pointing to the IP address noted earlier.

Coding Work-Arounds

Don't rely on your app being on port 80. Your instances do not run on port 80; a load-balancer takes requests and forwards them to the appropriate port on the server where your app is actually hosted. The load-balancer does, however, forward the original host header, so if you have any code that builds absolute URLs, you need to keep this in mind. There is an article on AppHarbor's support site describing their recommended workaround.

On every build, the entire app folder is deleted and recreated, so you can't store any user-uploaded content within your site (e.g. in your App_Data folder). A favoured approach is to use Amazon S3 to store files; because AppHarbor is actually built on top of Amazon EC2 (US-East region), if you choose the US Standard region for your S3 storage you won't pay transfer costs, because they are in the same region.

Avoiding Magic Strings in ASP.NET MVC Authorize Filters (19 May 2011)
Using the standard [Authorize] filter in ASP.NET MVC results in "magic strings": comma-separated role names that define which roles are authorised to access an action. Take a typical Forms Authentication setup, where you want to restrict an action to users in either the "Administrator" or "Assistant" role:
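
That is, something like this (the action itself is just illustrative):

[Authorize(Roles = "Administrator,Assistant")]
public ActionResult Manage()
{
    return View();
}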

This isn't ideal; we might end up peppering our controllers with these string constants, meaning any change to the role names can't easily be refactored. We should at least declare constants for the role names and re-use them in the [Authorize] filters so that our refactoring tools can help us:
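
For example (the controller is illustrative):

public class AdminController : Controller
{
    private const string AdministratorRole = "Administrator";
    private const string AssistantRole = "Assistant";

    [Authorize(Roles = AdministratorRole + "," + AssistantRole)]
    public ActionResult Manage()
    {
        return View();
    }
}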

Whilst this is better, as we are avoiding magic strings, it just doesn't look clean with the string concatenation, especially if we have a lot of roles. It's tempting to try to set the Roles property to a value returned from a static method, but remember that attributes can only contain constant expressions (which is why we have to use const strings for "AdministratorRole" and "AssistantRole"). We can do better.

If we only need to restrict access by role, as opposed to restricting access to particular users, there is a simple and elegant solution: customise the standard [Authorize] filter with a new constructor that takes a variable number of role name arguments and converts them into the comma-separated string that the base class expects:
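
A sketch of the customised filter, plus usage with the constants from earlier:

public class AuthorizeRolesAttribute : AuthorizeAttribute
{
    public AuthorizeRolesAttribute(params string[] roles)
    {
        // The base class expects a single comma-separated string
        Roles = string.Join(",", roles);
    }
}

// Usage - no string concatenation in sight:
[AuthorizeRoles(AdministratorRole, AssistantRole)]
public ActionResult Manage()
{
    return View();
}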

Using LESS CSS with ASP.NET (13 May 2011)

As a software developer, using CSS can sometimes be frustrating, as it violates the DRY principle: a typical CSS file contains a lot of duplication. Fortunately, there is a suite of frameworks that can help - CSS extension languages that compile down to standard CSS. LESS CSS is one such framework. A LESS CSS file looks quite similar to a normal CSS file, except that it adds some additional features to the CSS language, and gets compiled into standard CSS that your browser will understand. LESS CSS introduces four main features: variables, mixins, nested rules and operations. Let's have a quick look at each.

Variables

Variables allow you to define values in a single place and re-use them elsewhere. You can use variables for any style value, whether it's an RGB colour code or a font-family declaration. Let's look at an example:
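
A minimal sketch - change @brand-color in one place and every rule that uses it follows:

@brand-color: #4D926F;
@base-font: Arial, Helvetica, sans-serif;

h1 {
  color: @brand-color;
  font-family: @base-font;
}
a {
  color: @brand-color;
}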

Mixins

Mixins allow you to re-use sets of styles as a named unit, so that you can include whole classes in other classes. Even better, a mixin can take parameters, making it act much like a function. Let's look at an example:
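
For instance, a hypothetical .rounded mixin with a default parameter value:

.rounded(@radius: 5px) {
  -moz-border-radius: @radius;
  -webkit-border-radius: @radius;
  border-radius: @radius;
}

.login-button {
  .rounded(10px); // include the whole mixin, overriding the default radius
}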

Operations

This powerful feature lets you define style values with expressions, which means you can derive style values from numerical operations on your variables. Rather usefully, you can also apply these operations to colour values; for instance, you can define a colour variable that is 50% darker than another colour variable you have declared. An example:
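
A small sketch (dividing a colour by 2 halves each RGB channel, giving a roughly 50% darker shade):

@base-color: #4D926F;
@dark-color: @base-color / 2; // roughly 50% darker than @base-color
@wide-margin: 10px * 2;

.panel {
  border: 1px solid @dark-color;
  margin: @wide-margin;
}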

Integrating with ASP.NET

So all of this is great, but how do I actually use it in an ASP.NET app? The official LESS CSS implementation is in Ruby, but there is a .NET port called dotLESS that makes it easy to integrate with ASP.NET. You can download the required assemblies/tools from the web-site, but it's probably easier to use NuGet to add the dotLESS package to your project. There are two ways to integrate LESS CSS files into your app: either use an HTTP Handler to compile your LESS CSS files on-the-fly, or set up a post-build action to compile them prior to deployment. Let's check out both options:

1) HTTP Handler to compile LESS CSS files on-the-fly
Ensure you have added dotless.Core as a reference, and that your web.config file contains the following lines:
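
Roughly like this - double-check the type names against the dotLESS version you're using:

<configuration>
  <configSections>
    <section name="dotless" type="dotless.Core.configuration.DotlessConfigurationSectionHandler, dotless.Core" />
  </configSections>

  <!-- minify and cache the generated CSS for maximum efficiency -->
  <dotless minifyCss="true" cache="true" />

  <system.web>
    <httpHandlers>
      <add path="*.less" verb="*" type="dotless.Core.LessCssHttpHandler, dotless.Core" />
    </httpHandlers>
  </system.web>
</configuration>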

The above configuration routes any HTTP request ending in ".less" through the dotLESS compiler, to be converted into standard CSS. We've also specified that the CSS should be minified and cached for maximum efficiency.

2) Post-build action to compile LESS CSS files before deployment
Given that CSS files rarely change after an app has been deployed, a good solution is to compile your LESS CSS files before deployment. The advantage over the HTTP Handler method is that you end up with normal static files, which IIS can serve more efficiently than running requests through a handler, and it also means your files can be hosted externally, e.g. on a CDN. The simplest way to achieve this is with a post-build action on your web project. dotLESS ships with a command-line tool to compile LESS CSS; if you used NuGet to add dotLESS to your solution, the executable can be found at {solutionPath}\packages\dotless.1.1.0\Tools\dotless.Compiler.exe.

So in Visual Studio, open the properties of your web project, go to the "Build Events" section, and in the "Post-build event command line" box, insert the following line:
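
Something along these lines, adjusting paths and the package version to match your solution ($(SolutionDir) and $(ProjectDir) are standard Visual Studio build macros):

"$(SolutionDir)packages\dotless.1.1.0\Tools\dotless.Compiler.exe" -m "$(ProjectDir)content\*.less"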

Every time the project builds, this command will compile any .less file in the \content folder into a corresponding .css file, minifying it as well (with the -m switch).

During development this post-build approach can be frustrating, as any changes you make to your .less files while the app is running are not reflected until you next build the project. Fortunately dotless.Compiler.exe has a -w switch which watches your .less files for changes, so you can edit them without rebuilding.

TIP
You don't actually have to use the .less extension; you can use anything. Visual Studio doesn't know how to handle .less files, so it treats them as standard text files. If you use the naming convention {filename}.less.css, Visual Studio will treat the file as a standard CSS file and provide some formatting and intellisense. Sure, it's not perfect, as Visual Studio doesn't understand the LESS-specific features, but it's still better than nothing.

Hello there! I'm Jono Ward, a South African developer living in South-West London. I'm starting this blog to journal all the interesting things I learn every day in my job as a software developer. I've decided to build my own blog engine as an excuse to try new technologies. I know what you're thinking: I'm re-inventing the wheel, and you're absolutely right! Is this a wise thing to do, considering I could just install Wordpress in a few clicks, select a snazzy theme and start blogging almost immediately? Probably not, but then I wouldn't get to build stuff :) Because I'll need a reasonable tool to blog with, it should give me the incentive (i.e. kick in the ass) I need to actually ship something. On the tech side, I'm basically a .NET guy, but I like what the Alt.NET community stands for, and I also want to dabble with new technologies, maybe Ruby and Python, to get a feel for what it's like to build stuff in those environments.

It would be awesome to get feedback from anyone taking the time to read my blog, so please feel free to comment on my posts, or if you'd like to get in touch, you can email me at jonoward[at]gmail.com.