Project Description
This is a WPF library containing a powerhouse of controls, frameworks, helpers, tools, etc. for productive WPF development.
If you have ever heard of Drag and Drop with Attached properties, ElementFlow, GlassWindow, this is the library that will contain all such goodies.
Here is the introductory blog post

At this time the library is in a source-only form and requires .NET Framework 3.5 SP1 or later. To build this project on your machine, you need
VS2010.

Alright, this blog has been quiet for a few months. But that doesn’t mean that I have stopped writing.

On the contrary, I am doing more of it as a contributing author at
NetTuts+. The topics vary quite a bit but are all related to web development in one form or another. A sampling of my articles so far includes:

Thanks to my editor, Jeffrey Way, I was also given the opportunity to create a video course on the latest JS technologies like NodeJS, MongoDB, EmberJS, RequireJS, etc. This should be live soon and
I’ll tweet the link once it’s ready.

So, if you find this place a little quiet, be sure to check out
NetTuts+.

A seemingly simple language, yet a tangled mess of complexity. If you are picturing a giant CSS file from your website, you are on the right track. Yes, CSS can start out as a really simple language to learn but can be hard to master. The CSS chaos starts
slowly and seems innocuous at first. Over time, as you accumulate features and more variations on your website, the CSS explodes and you are soon fighting with the spaghetti monster.

Luckily this complexity can be brought under control. By following a few simple rules, you can bring order and structure to your growing pile of CSS rules.

These rules, as laid down by Scalable Modular Architecture for CSS (SMACSS), have a guiding philosophy:

Do one thing well

Be context-free (as far as possible)

Think in terms of the entire website/system instead of a single page

Separate layout from style

Isolate the major concerns for a webpage into layout, modules and
states

Follow naming conventions

Be consistent

SMACSS in action

The above principles can be put into practice in the following ways:

Avoid ID selectors, since an ID can be used only once on a page. Rely on class, attribute and pseudo selectors instead

Avoid namespacing classes under an ID. Doing so limits those rules to that one section of the page. If the same rules need to be applied to other sections, you will end up adding more selectors to the rule. This seems harmless at the
outset but soon becomes a habit. Avoid it with a vengeance.

Modules help in isolating pieces of content on the page. Modules are identified by classes and can be extended with sub-modules. By relying on the fact that you can apply multiple classes to an HTML tag, you can mix rules from modules and
sub-modules on the same tag.

The page starts out as a big layout container, which is then broken down into smaller
layout containers such as header, footer,
navigation, sidebar, content. This can go as deep as you wish. For example, the
content area will be broken down further on most websites. When defining a layout rule, make sure you don’t mix in presentation rules such as fonts, colors, backgrounds or borders. Layout rules should only contain box-model properties like margins,
padding, positioning, width and height.

The content inside a layout container is described via modules. Modules
can change containers but always retain their default style. Variations in modules are handled as
states and sub-modules. States are applied via class selectors, pseudo selectors or attribute selectors. Sub-modules are handled purely via class selectors.

Naming conventions such as below make it easier to identify the type of rule:
layout, module, sub-module or state

layout: .l-*

state: .is-*

module: .<name>

sub module: .<name> .<name>-<state>

Be conscious of depth of applicability. Deeply nested rules tie your CSS to the HTML structure, making it harder to reuse and increasing duplication.

An example to tie it all together

Alright, there are a lot of abstract ideas in here. Let’s do something concrete and build a simple webpage that shows a bunch of contact cards, like below:

Parallels to OO languages

To me the whole idea of SMACSS seems like an application of some of the ideas from OO languages. Here is a quick comparison:

Minimize or avoid Singletons: minimize or avoid #id selectors

Instances: tags in the HTML that have a class applied

Single inheritance: Modules and Sub-modules

Mixins: context free rules via states and layouts

Summary

By following a few simple rules, SMACSS can save you a lot of maintenance headache. It may seem a little alien at first, but after you do a simple project it will become more natural. In the end,
it’s all about increasing productivity and having a worry-free sleep ;-)

It reads a little better and can almost look like real HTML with the indentation ;)

Minimize use of if/else blocks by creating object hashes

Let’s say you want to perform a bunch of different actions based on the value of a certain parameter. For example, if you want to show different views based on the weather condition received via an AJAX request, you could do something like below:

<figure class='code'><figcaption></figcaption>

function showView(type) {
  if (_.isObject(type)) {
    // read object structure and prepare view
  } else if (_.isString(type)) {
    // validate string and show the view
  }
}

function showWeatherView(condition) {
  if (condition === 'sunny') showView('sunny-01');
  else if (condition === 'partly sunny') showView('sunny-02');
  else if (condition === 'cloudy') showView('cloudy-01');
  else if (condition === 'rain') showView({ type: 'rain-01', style: 'dark' });
}

$.get('http://myapp.com/weather/today', function (response) {
  var condition = response.condition;
  // Show view based on this condition
  showWeatherView(condition);
});

</figure>

You will notice there is a lot of imperative noise in showWeatherView(), with all the if/else statements. This can be removed with an object hash:
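The object-hash version of the snippet is not shown above, so here is a minimal sketch of the idea; the `showView` stub and the view names are assumptions based on the surrounding text:

```javascript
// Minimal stand-in for showView so this sketch runs on its own;
// the real showView is defined earlier in the post.
function showView(type) { return type; }

// Pure data: the condition-to-view mapping lives in one hash
var viewMap = {
  'sunny': 'sunny-01',
  'partly sunny': 'sunny-02',
  'cloudy': 'cloudy-01',
  'rain': { type: 'rain-01', style: 'dark' }
};

// Pure code: a single lookup replaces the if/else chain
function showWeatherView(condition) {
  var view = viewMap[condition];
  return view ? showView(view) : null;
}
```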

If you want to support more views, it is easy to add them to the
viewMap hash. The general idea is to look at a piece of code and think in terms of
data + code: what part is pure data and what part is pure code? If you can make that separation, you can capture the
data part as an object hash and write simple code to loop over and process the data.

As a side note, if you want to eliminate the use of if/else and
switch statements entirely, you can have Haskell-style pattern matching with the
matches library.

Make the parameter value be of any-type

When you are building a simple utility library/module, it is good to expose an option that can be any of
string, number, array or function type. This makes the option more versatile and allows for some logic to be executed each time the option value is needed. I first saw this pattern used in libraries like
HighCharts and
SlickGrid and found it very natural.

Let’s say you want to build a simple formatter. It can accept a string to be formatted using one of the pre-defined formats or use a custom formatter. It can also apply a chain of formatters, when passed as an array. You can have the API for the formatter
as below:

<figure class='code'><figcaption></figcaption>

function format(formatter, value) {
  var knownFormatters = {
        '###,#': function (value) {},
        'mm/dd/yyyy': function (value) {},
        'HH:MM:ss': function (value) {}
      },
      formattedValue = value;

  if (_.isString(formatter)) {
    // Lookup the formatter from the list of known formatters
    formattedValue = knownFormatters[formatter](value);
  } else if (_.isFunction(formatter)) {
    formattedValue = formatter(value);
  } else if (_.isArray(formatter)) {
    // This could be a chain of formatters
    formattedValue = value;
    _.each(formatter, function (f) {
      formattedValue = format(f, formattedValue); // Note the recursive use of format()
    });
  }

  return formattedValue;
}

</figure>

As an addendum to a multi-type parameter, it is also common to normalize the parameter value to an object hash and remove type differences.
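As a sketch of that normalization (the shape of the normalized object and the function name are illustrative, not from the original post):

```javascript
// Collapse the string/function/array cases into one object shape,
// so downstream code only ever deals with a single form.
function normalizeFormatter(formatter) {
  if (typeof formatter === 'string') {
    return { type: 'named', name: formatter };
  }
  if (typeof formatter === 'function') {
    return { type: 'custom', fn: formatter };
  }
  if (Array.isArray(formatter)) {
    return { type: 'chain', formatters: formatter };
  }
  return { type: 'identity' };
}
```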

Use IIFE to compute on the fly

Sometimes you just need a little bit of code to set the value of an option. You can either compute the value separately or do it inline by writing an
Immediately Invoked Function Expression (IIFE):
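The original snippet is not shown here, so here is a minimal sketch of the idea; the option name and values are illustrative:

```javascript
// Compute the option's value inline, right where it is declared
var options = {
  rowsPerPage: (function () {
    var viewportHeight = 768; // illustrative; normally measured at runtime
    var rowHeight = 32;
    return Math.floor(viewportHeight / rowHeight);
  })()
};
```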

The ExpressJS framework is one of the simpler yet very powerful web frameworks for NodeJS.
It provides a simple way to expose GET / POST endpoints on your web application, which then serve
the appropriate response. Getting started with ExpressJS is easy, and the
Guides on the ExpressJS website are very well written to make you effective in short order.

Moving towards a flexible app structure

When you have a simple app with a few endpoints, it is easy to keep everything
self-contained right inside the top-level app.js. However, as you start
building up more GET / POST endpoints, you need an organization scheme
to help you manage the complexity. As a simple rule,

When things get bigger, they need to be made smaller ;-)

Fortunately, several smart folks figured this out earlier and came up
with approaches that are wildly successful. Yes, I am talking about Rails and
the principle of “Convention over Configuration”. So let’s apply them to our
constantly growing app.

Route management

Most of the routes (a.k.a. RESTful endpoints) that you
expose on your app can be logically grouped together based on a feature. For
example, you may have endpoints such as:

/login

/login/signup

/login/signup/success

/login/lostpassword

/login/forgotusername

… you can group them under the “login” feature. Similarly, you may have other endpoints
dedicated to other workflows in your app, like uploading content, creating users, editing
content, etc. These kinds of routes naturally fit into a group, and that’s the first cue for
breaking them apart. As a first step, you can put the logically related
GET / POST endpoints in
their own file, e.g. login.js. Since you may have several groups of routes, you will end up with
lots of route files.

Putting all of these files at the top level is definitely going to cause
clutter. To simplify this further, put them into a sub-folder, e.g. /routes. The project structure now looks much cleaner:

Since we are working with NodeJS, each file becomes a module and the objects in the module can be
exposed via the exports object. We can establish a simple protocol: each route module must
have an init function, which we call from app.js, passing in the necessary context for the route.
In the case of login, this could look like so:

If you are using a recent version of ExpressJS, 2.5.8 as of this writing, the command-line
interface provides a way to quickly generate the express app. If you type express [options]
name-of-the-app, it will generate a folder named name-of-the-app in the current working directory. Not surprisingly, express creates the
/routes folder for you, which is already taking you in the right direction. I only learnt this recently and have so far been doing the hard work of starting from scratch each time. Sometimes spending a little more time on the manual helps!
RTFM FTW.

Once we have the route files as described, it is easy to load them from
app.js. Using the filesystem module, we can quickly load each module and call
init() on each one. We do this before the app is started. The
app.js skeleton looks like so:

Now we can just keep adding more routes, each group in its own file, and continue to build endpoints without severely complicating app.js. The app.js file now follows the
Open-Closed Principle (app.js is open for extension but closed for modification).

In short…

As you can see, it is actually a simple idea, but when applied to other parts of your application, it can substantially reduce the maintenance overhead. So in summary:

Establish conventions to standardize a certain aspect of the program. In our case it was routes.

It’s been a while since I posted anything on this blog. Thought I’d break the calm with a quick post about my recent sketch.

I generally use
Autodesk SketchBook Pro (SBP) on my Mac for the initial doodling. I then develop a fairly finished sketch before importing it into Photoshop for any post-processing. Luckily, SBP saves files in PSD format, making the Photoshop import easy. The
following sketch was done entirely in SBP:

This was done in about 30 mins as a quick sketch to demonstrate the use of SBP and a Wacom tablet to a close friend. He was quite impressed and immediately ordered a bunch of items, including a
Wacom Bamboo stylus for the iPad. I guess
marketing wouldn’t be a bad alternate career!

With a powerful platform like iOS, it is not surprising to have a variety of options for drawing. Picking the one that works best may sometimes require a bit of experimentation.
Case in point: a pie chart whose slices had to be animated as the values changed over time.
In this blog post, I would like to take you through the various stages of my design process before I ended up with something close to what I wanted. So let’s get started.

First steps

Let’s quickly look at the array of options we have for building up graphics in iOS:

Use the standard Views and Controls in UIKit and create a view hierarchy

Use the UIAppearance protocol to customize standard controls

Use UIWebView and render some complex layouts in HTML + JS. This is a surprisingly viable option for certain kinds of views

Use UIImageView and show a pre-rendered image. This is sometimes the best way to show a complex graphic instead of building up a series of vectors. Images can be used more liberally in iOS and many of the standard controls even accept an image
as parameter.

Create a custom UIView and override drawRect:. This is like the chain-saw in our toolbelt. Used wisely it can clear dense forests of UI challenges.

Apply masking (a.k.a. clipping) on vector graphics or images. Masking is often underrated in most toolkits but it does come very handy.

Use Core Animation Layers: CALayer with shadows, cornerRadius or masks. Use
CAGradientLayer, CAShapeLayer or CATiledLayer

Create a custom UIView and render a CALayer hierarchy

As you can see, there are several ways to create an interactive UI control. Each of these options sits at a different level of abstraction in the UI stack, so choosing the right combination can be an interesting thought-exercise. As one gains
more experience, picking the right combination becomes more obvious and also a lot faster.

A path for the slice

With that quick overview of the UI options in iOS, let’s get back to our problem of building an animated pie chart. Since we are talking about animation, it is natural to think about Core Animation and CALayers. In fact, a
CAShapeLayer with a path for the pie-slice is a good first step. Using the
UIBezierPath class is easier than a bunch of CGPathXXX calls.

<figure class='code'><figcaption></figcaption>

- (CAShapeLayer *)createPieSlice {
    CAShapeLayer *slice = [CAShapeLayer layer];
    slice.fillColor = [UIColor redColor].CGColor;
    slice.strokeColor = [UIColor blackColor].CGColor;
    slice.lineWidth = 3.0;

    CGFloat angle = DEG2RAD(-60.0);
    CGPoint center = CGPointMake(100.0, 100.0);
    CGFloat radius = 100.0;

    UIBezierPath *piePath = [UIBezierPath bezierPath];
    [piePath moveToPoint:center];
    [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(angle),
                                        center.y + radius * sinf(angle))];
    [piePath addArcWithCenter:center
                       radius:radius
                   startAngle:angle
                     endAngle:DEG2RAD(60.0)
                    clockwise:YES];
    // [piePath addLineToPoint:center];
    [piePath closePath]; // this will automatically add a straight line to the center

    slice.path = piePath.CGPath;
    return slice;
}

</figure>

The path consists of two radial lines originating at the center of the circle, with an arc between the end-points of the lines.

The angles in the call to addArcWithCenter use the following unit-coordinate system:

DEG2RAD is a simple macro that converts from degrees to radians

When rendered the pie slice looks like below. The background gray circle was added to put the slice in the context of the whole circle.

Animating the pie-slice

Now that we know how to render a pie-slice, we can start looking at animating it. When the angle of the pie-slice changes we would like to smoothly animate to the new slice. Effectively the pie-slice will grow or shrink in size, like a radial fan of cards
spreading or collapsing. This can be considered as a change in the path of the
CAShapeLayer. Since CAShapeLayer naturally animates changes to the
path property, we can give it a shot and see if that works. So, let’s say, we want to animate from the current slice to a horizontally-flipped slice, like so:

To achieve that, let’s refactor the code a bit and move the path creation into its own method.

<figure class='code'><figcaption></figcaption>

- (CGPathRef)createPieSliceWithCenter:(CGPoint)center
                               radius:(CGFloat)radius
                           startAngle:(CGFloat)degStartAngle
                             endAngle:(CGFloat)degEndAngle {
    UIBezierPath *piePath = [UIBezierPath bezierPath];
    [piePath moveToPoint:center];
    [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(DEG2RAD(degStartAngle)),
                                        center.y + radius * sinf(DEG2RAD(degStartAngle)))];
    [piePath addArcWithCenter:center
                       radius:radius
                   startAngle:DEG2RAD(degStartAngle)
                     endAngle:DEG2RAD(degEndAngle)
                    clockwise:YES];
    // [piePath addLineToPoint:center];
    [piePath closePath]; // this will automatically add a straight line to the center

    return piePath.CGPath;
}

- (CAShapeLayer *)createPieSlice {
    CGPoint center = CGPointMake(100.0, 100.0);
    CGFloat radius = 100.0;

    CGPathRef fromPath = [self createPieSliceWithCenter:center radius:radius startAngle:-60.0 endAngle:60.0];
    CGPathRef toPath = [self createPieSliceWithCenter:center radius:radius startAngle:120.0 endAngle:-120.0];

    CAShapeLayer *slice = [CAShapeLayer layer];
    slice.fillColor = [UIColor redColor].CGColor;
    slice.strokeColor = [UIColor blackColor].CGColor;
    slice.lineWidth = 3.0;
    slice.path = fromPath;

    CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"path"];
    anim.duration = 1.0;
    // flip the path
    anim.fromValue = (__bridge id)fromPath;
    anim.toValue = (__bridge id)toPath;
    anim.removedOnCompletion = NO;
    anim.fillMode = kCAFillModeForwards;
    [slice addAnimation:anim forKey:nil];

    return slice;
}

</figure>

In the refactored code, createPieSlice just calls createPieSliceWithCenter:radius:startAngle:endAngle: for the from- and to-paths and sets up an animation between the two. In action, this looks like so:

Yikes! That is definitely not what we expected. CAShapeLayer is morphing the paths rather than growing or shrinking the pie slices. Clearly, we need to adopt stricter measures for animating the pie slices.

Raising the level of abstraction

Clearly CAShapeLayer doesn’t understand pie-slices and has no clue how to animate a slice in a natural manner. We definitely need more control over how the pie slice changes. Luckily, our API already hints at the kind of abstraction
we need: a pie slice described in terms of {startAngle, endAngle}. These parameters are stricter and less free-form than the points of a bezier path. By making them animatable, we should be able to animate the pie-slices just
the way we want.

Applying this idea to our previous animation example, the path can be said to be changing from
{-60.0, 60.0} to {120.0, -120.0}. By animating the
startAngle and endAngle, we should be able to make the animation more natural. In general, if you find yourself tackling a tricky problem like this, take a step back and check if you are at the
right level of abstraction.

Custom CALayer, the PieSliceLayer

If a CAShapeLayer can’t do it, we probably need our own custom CALayer. Let’s call it the
PieSliceLayer and give it two properties: … you guessed it…
startAngle and endAngle. Any change to these properties will cause the custom layer to redraw and also animate the change. This requires following a few standard procedures as prescribed by Core Animation Framework.

First, don’t @synthesize the animatable properties; instead, mark them as
@dynamic. This is required because Core Animation does some magic under the hood to track changes to these properties and call the appropriate methods on your layer.

Override actionForKey: and return a CAAnimation that prepares the animation for that property. In our case, we will return an animation for the
startAngle and endAngle properties.

Override initWithLayer: to copy the properties into the new layer. This method gets called for each frame of animation. Core Animation makes a copy of the
presentationLayer for each frame of the animation. By overriding this method we make sure our custom properties are correctly transferred to the copied-layer.

Finally we also need to override needsDisplayForKey: to tell Core Animation that changes to our
startAngle and endAngle properties will require a redraw.

With that we now have a custom PieSliceLayer that animates changes to the angle-properties. However the layer does not display any visual content. For this we will override the
drawInContext: method.

Rendering the PieSliceLayer

Here we draw the slice just the way we did earlier. Instead of using UIBezierPath, we now go with the Core Graphics calls. Since the
startAngle and endAngle properties are animatable and also marked for redraw, this layer will be rendered each frame of the animation. This will give us the desired animation when the slice changes its inscribed angle.

It all comes together in PieView

When we originally started, we wanted to build a Pie Chart that animated changes to its slices. After some speed bumps we got to a stage where a single slice could be described in terms of start/end angles and have any changes animated.

If we can do one slice, we can do many! A pie chart is a visualization of an array of numbers, where each number is represented by a
PieSliceLayer instance. The size of a slice depends on its relative value within the array. An easy way to get the relative value is to normalize the array and use the normalized value
[0, 1] to arrive at the angle of the slice, i.e. normal * 2 * M_PI. For example, if the normalized value is 0.5, the angle of the slice will be
M_PI, or 180°.

Managing the slices

The PieView manages the slices in a way that makes sense for a Pie Chart. Given an array of numbers, the
PieView takes care of normalizing the numbers, creating the right number of slices and positioning them correctly in the pie. Since
PieView will be a subclass of UIView, we also have the option to introduce some touch interaction later. Having a UIView that hosts a bunch of CALayers is a common approach when dealing with an interactive element like the PieChart.

The PieView exposes a sliceValues property which is an
NSArray of numbers. When this property changes, PieView manages the CRUD around the
PieSliceLayers. If there are more numbers than slices, PieView will add the missing slices. If there are fewer numbers than slices, it removes the excess. All the existing slices are updated with the new numbers. All of this happens
in the updateSlices method.

Unit testing in JavaScript, especially with RequireJS, can be a bit of a challenge. Jasmine, our unit-testing framework, has no out-of-the-box support for RequireJS. I have seen a few ways of integrating
RequireJS, but they require hacking the SpecRunner.html file, the main test harness that executes all Jasmine tests. That wasn’t really an option for us, as we were using a Ruby gem called
jasmine to auto-generate this HTML file from our spec files. There is, however, an
experimental gem created by Brendan Jerwin that provides RequireJS integration. We considered that option before ruling it out for lack of official support. After a bit of flailing around, we finally
hit upon a little nugget in the core Jasmine framework that seemed to provide a solution.

Async tests in Jasmine

For a long time, most of our tests used the standard prescribed procedure in Jasmine:
describe() with a bunch of it()s. This worked well for the most part, until we switched to RequireJS as our script loader. Then there was only
blood red on our test pages.

Clearly jasmine and RequireJS have no mutual contract, but there is a way to run async tests in jasmine with methods like
runs(), waits() and waitsFor(). Out of these,
runs() and waitsFor() were the real nuggets, which complement each other when running async tests.

waitsFor() takes in a function that should return a boolean when the work item has completed. Jasmine will keep calling this function until it returns true, with a default timeout of 5 seconds. If the worker function doesn’t complete by
that time, the test will be marked as a failure. You can change the error message and the timeout period by passing in additional arguments to
waitsFor().

runs() takes in a function that is called whenever it is ready. If a runs() is preceded by a waitsFor(), it will execute only when the waitsFor() has completed. This is great, since it is exactly what we need to make our RequireJS-based tests
run correctly. In code, the usage of waitsFor() and runs() looks as shown below. Note that I am using
CoffeeScript here for easier readability.

— Short CoffeeScript Primer —

In CoffeeScript, the -> (arrow operator) translates to a function(){} block. Functions can be invoked without parentheses, e.g.
foo args is the same as foo(args). The last statement of a function is its return value. Thus,
() -> 100 becomes function(){ return 100; }. With this primer, you should be able to follow the code snippets below.

Jasmine meets RequireJS

waitsFor() along with runs() holds the key to running our RequireJS based tests. Within
waitsFor() we wait for the RequireJS modules to load and return true whenever those modules are available. In
runs() we take those modules and execute our test code. Since this pattern of writing tests was becoming so common, I decided to capture that into a helper method, called
ait().

If you are wondering about the name ait(), it is just to keep up with the spirit of Jasmine methods like
it for a test case and xit for an ignored test case. Hence
ait, which stands for “async it”. This method takes care of waiting for the RequireJS modules to load (which are passed in the
modules argument) and then proceeding with the call to the testFn in
runs(), which has the real test code. The testFn takes the modules as individual arguments. Note the special CoffeeScript syntax
arrayOfModules... for the expansion of an array into individual elements.
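The original CoffeeScript version of ait() is not shown here, so here is an equivalent plain-JavaScript sketch; it assumes Jasmine’s global it/waitsFor/runs and RequireJS’s require, exactly as described above:

```javascript
// ait("async it"): wait for the RequireJS modules, then run the test body
function ait(description, modules, testFn) {
  it(description, function () {
    var loaded = null;

    // Kick off the async load of the RequireJS modules
    require(modules, function () {
      loaded = Array.prototype.slice.call(arguments);
    });

    // Poll (default 5 s timeout) until every module has arrived
    waitsFor(function () {
      return loaded !== null;
    }, 'RequireJS modules to load');

    // Only then run the real test body, with the modules as arguments
    runs(function () {
      testFn.apply(null, loaded);
    });
  });
}
```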

The ait method really reads as: it waitsFor() the RequireJS modules to load and then
runs() the test code

The test case (“should do something nice”) takes in two modules,
obedient_model and sub_model, which resolve to the arguments
ObedientModel and SubModel, and then executes the test code. Note that I am relying on the default timeout of
the waitsFor() method. So far this works great, but that may change as we build up more tests.

In the world of jQuery or for that matter, any JavaScript library, callbacks are the norm for programming asynchronous tasks. When you have several operations dependent on the completion of some other operation, it is best to handle them as a callback. At
a later point when your dependent task completes, all of the registered callbacks will be triggered.

This is a simple and effective model and works great for UI applications. With
jQuery.Deferred(), this programming model has been codified with a set of utility methods.

$.Deferred() is the entry point for dealing with deferred operations. It creates a
“promise” (a.k.a. Deferred object) that triggers all the registered
done() or then() callbacks once the Deferred object reaches the
resolved state. This is per the CommonJS specification for Promises. I am not going to cover all the details of $.Deferred(), since the
jQuery docs do a much better job. Instead, I’ll jump right into the main topic of this post.

The soup of AMD, $.Deferred and Google Maps

In one of my recent explorations with web apps, the AMD pattern turned out to be extremely useful. AMD, with the
RequireJS library, forces a certain structure on your project and makes building large web apps more digestible. Abstractions like the require/define calls allow building apps that are more composable and extensible. It
sure is a great way to think about composable JS apps, in contrast to crude
<script> tags.

With these abstractions, it was easier to think of the app as a set of modules. Some modules provide base level services, while others depend on such
service-modules. One particular module, which also happens to be the entry point into the app, was heavily dependent on the Google Maps API. Early on, it was decided to never keep the user waiting for the maps to load and to allow interaction right from
the get-go. This meant that users could do some map-related tasks even before the maps API had loaded. Although this felt impossible at the outset, it turned out to be quite easy, all thanks to
$.Deferred().

The first step was to wrap the Google Maps API in a GoogleMaps object. This hides away the details about loading the maps while allowing the user to carry on with the map related tasks.

With this, we can continue making calls to each of these methods as if the maps API is already loaded. Each time we make a call, it will be pushed into the deferred queue. At some point, when the maps API is loaded, we need to call a
resolve() on the deferred object. This will cause the queue of calls to be flushed and resulting in real work being done.
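A minimal sketch of that wrapper, since the original code is not shown here; method names like `search` and `_doSearch` are illustrative, and jQuery’s `$.Deferred` is assumed to be available:

```javascript
function GoogleMaps() {
  // One deferred gates every map-related call
  this._mapsLoaded = $.Deferred();
}

// Each public method just queues its work on the deferred
GoogleMaps.prototype.search = function (address) {
  var self = this;
  this._mapsLoaded.done(function () {
    self._doSearch(address); // the real geocoding happens here
  });
};

// Called once the maps API script has loaded (e.g. from the global callback)
GoogleMaps.prototype.apiLoaded = function () {
  this._mapsLoaded.resolve(); // flushes every queued call
};
```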

One aside on the code above is the use of _.bind(function(){}, this). This is required because the callback to
done() changes the context of this. To keep it pointing at the GoogleMaps instance, we employ
_.bind().

The Google Maps API has an async loading option, with a callback name specified as a query parameter on the API URL. When the API loads, it calls this function (in our case,
gmapsLoaded). Note that this needs to be a global function, i.e. on the
window object. A require call (from RequireJS) makes it easy to load this script.

Once the callback is made, we finally call resolve() on our deferred object:
_mapsLoaded. This will trigger the enqueued calls and the user starts seeing the results of his searches.

Summary

In short, what we have really done is:

Abstract the google maps API with a wrapper object

Create a single $.Deferred() object

Queue up calls on the maps API by wrapping the code inside done()

Use the async loading option of google maps api with a callback

In the maps callback, call resolve() on the deferred object

Make the user happy

Demo

In the following demo, you can start searching on an address even before the map loads. Go ahead and try it. I have deliberately put in a
5 second delay on the call to load the maps API, just for a flavor of 3G connectivity!

Don’t forget to browse the code in your Chrome Inspector. You do use Chrome, don’t you? ;-)

As I
blogged about earlier, Octopress is a great framework for writing blog posts and packs in all the features for writing a code-centric blog. Of course, it goes without saying that the blog also looks awesome, as if
designed by a true designer. One of the nicer things about writing posts is that there are rake tasks that do most of the grunt work:

rake new_post["Just type the title of the post here in plain English"]

This will create a new file under source/_posts called 2011-09-29-just-type-the-title-of-the-post-here-in-plain-english.markdown

rake new_page[about]

This will create a new page under source/about, called index.markdown

rake preview

This sets up a local webserver on http://localhost:4000 and starts monitoring the
source folder for any changes. It automatically generates the corresponding HTML/CSS for the Markdown/SASS files respectively.

Speed up

If you have just migrated from a
Wordpress blog or have lots of posts under your source/_posts, the rake task that generates the HTML output can take a very long time (several minutes). Obviously if you are just working on one post, there is no need to wait for the entire
site to generate. What you are looking for is the rake isolate[partial_post_name] task.

Using rake isolate, you can “isolate” only that post you are working on and move all the others to the
source/_stash folder. The partial_post_name parameter is just some words in the file name for the post. For example, if I want to isolate the post from the earlier example, I would use

rake isolate[plain-english]

This will move all the other posts to source/_stash and only keep the
2011-09-29-just-type-the-title-of-the-post-here-in-plain-english.markdown post in
source/_posts. You can also do this while you are running rake preview. It will just detect a massive change and only regenerate that one post from then on.

All set to publish

When you are ready to publish your site, just run rake integrate and it will pull all the posts from
source/_stash and put them under source/_posts. Now you can run
rake generate and then rake deploy to publish your updated blog.

If these seem like a lot of commands to remember, don't worry, they will become second nature once you do it a few times. As a summary, below are all the tasks that we talked about in this post. The description of each task comes from the Rakefile used by Octopress. I just did a rake -T to get a dump of all the tasks.

rake new_post[title]: Begin a new post in source/_posts

rake new_page[filename]: Create a new page in source/(filename)/index.markdown

rake generate: Generate jekyll site

rake deploy: Default deploy task

rake preview: Preview the site in a web browser

rake isolate[filename]: Move all other posts than the one currently being worked on to a temporary stash location (stash) so regenerating the site happens much quicker

rake integrate: Move all stashed posts back into the posts directory, ready for site generation

I have been using Wordpress for a few years now and have been very happy with its features. In the past year, I have tried several times to change the theme on my blog and also to semantify my posts by using Markdown as my de facto style. Of course, none of it happened and I was still using a combination of HTML and the Rich Text Editor for formatting my posts. The more I delayed, the more I realized that there were a lot more reasons to NOT like Wordpress:

I wanted to use
Markdown to write all my posts and Wordpress forced me to use HTML. I could certainly use some plugins to upload a markdown file which would then convert it into html, but that meant I had to store these markdown files in the wordpress database: less than
optimal.

Code formatting was not an easy task. I used Live Writer as my primary blog editor and it had a few plugins that could give you inline code highlighting. Although you got a real-time view of your syntax-highlighted code, it internally converted everything to HTML and discarded the original code snippet. You also had to be careful about editing around that code snippet, as a simple delete in the wrong place would require redoing the whole process. I felt it was too much work just to get some code highlighting.

The backup and local testing scenario was involved. For backup, I could either export all my posts in WXR format or take a dump of my database. Re-creating my blog locally meant getting an installation of MAMP and then importing the WXR or the database backup. I would have preferred a less intrusive approach to try out my wordpress site locally.

The wordpress technology stack was not very exciting for me. I never really enjoyed PHP and learnt it only to maintain my Wordpress site.

Exploring beyond Wordpress

I had seen a few bloggers use GitHub as their blogging engine with the
Jekyll framework to auto-generate their HTML pages from their markdown posts. This was very inviting except for the fact that I had to store all my posts publicly on Github. Even if I purchased a private plan from Github, the storage allocated was quite
minimal. GitHub for me was definitely not cost effective.

About this time, I saw a
tweet from Matt Gemmell where he migrated from Wordpress to a different engine called
Octopress. After reading his blog entry, I realized this was exactly the kind of framework I wanted.
Matt has a lot more content than I do and seeing him convert his blog successfully gave me the courage to do the same. Thus began an almost 10-day journey to convert my Wordpress blog to an Octopress blog!

Some of the things that won me over:

The default theme is very beautiful, with rich support for styling via Compass/SASS

Modifying the theme is simple, as it's based on Jekyll. If you haven't explored Jekyll yet, I strongly encourage you to give it a try.

Writing plugins is also quite simple and uses the Liquid templating system

My entire blog is contained within a folder from which I can generate the HTML

Uploading is taken care of with a rake task to deploy (Did I mention Octopress uses Ruby!)

I can preview my site locally with a simple rake preview command that starts up a local web server. It monitors changes to my blog and auto generates the html. This is great for composing posts and testing on the fly.

Excellent integration with social features like Google+, Twitter, Disqus, etc.

Migrating Wordpress posts

This was the most elaborate part of the process. Octopress requires that you write all your posts in Markdown or Textile; however, my Wordpress posts were all plain HTML. So I needed some converter that would do this transformation for me. Luckily on
Matt’s blog I read about the
exitWP plugin that takes care of this conversion. Although not seamless,
exitWP did give me a good starting ground since it converts all the posts to a Jekyll-compliant site.

I did have to go in and change several of my posts that used code snippets. I had been using a variety of code prettifiers over the years and the corresponding HTML was not the best for a Markdown conversion. It did mess up a lot of my posts and I spent several hours touching up the Markdown text.

I also got the chance to fix some of my old Urls that were still pointing to my old blog on Live Spaces. I also decided to make all my internal blog links relative and this required a combination of grep/awk and some manual intervention to fix up all the
links. Overall it was a fun exercise experimenting with some bash shell commands and a mix of some ruby scripting.

Migrating Wordpress comments to Disqus

Octopress has excellent integration with Disqus, a hosted comment management system. Disqus works by linking all the comments to a specific Url. As long as your posts maintain the same Url you can just use Disqus to import
all of your comments into your octopress blog. In my case, my comments were all on Wordpress and I had to first import
them into Disqus. As it turns out, this wasn’t a straightforward process.

I started by exporting my comments from Wordpress in the standard WXR XML format. When I tried to import this file into Disqus, it choked, complaining that the <link> tags were missing. The <link> tag contains the url that links the post to the comments. To fix that, I wrote some simple ruby code to update the WXR with the proper <link> tags. This time the import into Disqus went through without issues and all my comment threads got pulled in. The threads, however, were using the raw wordpress url (http://blog.pixelingene.com/?p=123) and I wanted to use a more semantic url of the form http://blog.pixelingene.com/year/month/the-post-slug. To fix this I created a simple Url map (CSV) and used the Disqus Url Mapping Tool to fix these links.
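The WXR fix-up can be sketched like this (the post used Ruby; this is the same idea in JavaScript, with the wp:post_name/wp:post_date element names from the WXR format and an assumed year/month/slug permalink scheme):

```javascript
// Sketch: insert a <link> element into each <item> of a WXR export,
// deriving the permalink from the post's slug and date.
function addLinks(wxr, baseUrl) {
  return wxr.replace(/<item>([\s\S]*?)<\/item>/g, function (item, body) {
    if (/<link>/.test(body)) return item;             // already has a link
    var slug = /<wp:post_name>(.*?)<\/wp:post_name>/.exec(body);
    var date = /<wp:post_date>(\d{4})-(\d{2})/.exec(body);
    if (!slug || !date) return item;                  // leave odd items alone
    var link = '<link>' + baseUrl + '/' + date[1] + '/' + date[2] +
               '/' + slug[1] + '</link>';
    return item.replace('</item>', link + '</item>'); // splice it in
  });
}
```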

Finally with all that done, my comments were safe and sound inside Disqus, with the right permalink-Urls. Now the next part was to link them up with my blog. Luckily this is as simple as specifying a
disqus_short_name in the Octopress config file!

Url Rewrites and other changes

Now that I had chosen to use a semantic permalink for my posts, I also had to make sure my existing links to the posts continued working. This was a matter of having some redirects set up on my website. I used the standard Apache directives (RewriteCond, RewriteRule) in my .htaccess to permanently redirect all of my old urls.

A few other things I had to do include:

404 page

Plugins (Liquid Tags) to embed Silverlight apps and Youtube videos

Change the feed Url from the default /atom.xml to my FeedBurner url

The one thing I haven't done yet is modify the theme from the default. I'll probably get to it one of these days.

Epilogue

So that's my experience with the Wordpress to Octopress migration. Although not a smooth transition, it wasn't terribly bad and I actually enjoyed the process of using a variety of tools. I have tried my best to make sure that all existing wordpress links, images, download links, demos, etc. continue working, but there is always that infinitesimal probability of missing something. If something does break, I'll find out one way or another. Until then, "enjoy the new blog!"

In JavaScript, if you set a property on the prototype, it is like a static property that is shared by all instances of the Function. This is common knowledge in JavaScript and quite visible in the code. However, if you are writing all your code in CoffeeScript, this fact gets hidden away by the way you declare properties.

If you declare properties without the @ symbol, you are effectively creating properties on the prototype of the class. This of course works great if you want a shared property, but it is certainly not the way to go if you want per-instance properties. I missed out using the @ symbol and my app went bonkers. This simple oversight cost me a fair bit of time debugging it. The right thing to do was to use the @property syntax, since I needed per-instance properties. In the code snippet shown above, staticProp is a property on the prototype of the Site function. @instanceProp is an instance property that will be available on each instance of Site. CoffeeScript translates the above source to the following JavaScript:
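Roughly, the compiled output looks like this (a sketch reconstructing the Site example from the description above; the original snippet assigned @instanceProp in the constructor):

```javascript
// What CoffeeScript emits: properties declared without @ land on the
// prototype (shared); @properties set in the constructor are per-instance.
var Site = (function () {
  function Site() {
    this.instanceProp = [];        // @instanceProp -> fresh array per instance
  }
  Site.prototype.staticProp = [];  // staticProp -> one array shared by all
  return Site;
})();

var a = new Site(), b = new Site();
a.staticProp.push(1);      // mutates the shared prototype array;
                           // b.staticProp now sees [1] too
a.instanceProp.push(1);    // b.instanceProp stays untouched
```

This is exactly why the shared-array bug is so easy to trip over: the mutation through one instance silently shows up on every other.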


In the previous post we saw how the D3.js library could be used to render tree diagrams. If you haven’t read that post yet, I would encourage reading it as we will be expanding on it in this post. Now that we have a nice tree diagram of a hierarchy, it would
be good to [...]

In the past few weeks, I have spent some time evaluating some visualization frameworks in Javascript. The most prominent ones include the Javascript InfoVis Toolkit, D3 and Protovis. Each of them is feature-rich and provides a variety of configurable layouts. In particular I was impressed with D3 as it gives a nice balance of features [...]

Of late, I have been building some Html/Javascript apps and exploring a bunch of javascript libraries, including the usual suspects (jQuery, jQuery UI, jQuery template, underscore, etc). The more interesting ones are visualization libraries like d3, isotope,
highcharts. In this post, I will focus on a specific scenario in the isotope.js library. Isotope.js Isotope.js is [...]

A few days back while I was busy designing some UI for a Silverlight app, I accidentally hit upon this fun hack. If you assign a shared Brush resource to the CaretBrush property of the TextBox control, then you start seeing some crazy blinking-light effects
at places where the shared Brush is used. It is [...]

After having worked full-time for several years in the Corporate world, I have decided to make a career change and jump on to Consulting. I have joined my friends at , where I’ll be working in the Financial district of New York building solutions using Microsoft
.Net, C#, WPF, Silverlight and others. I have known [...]

I have been playing around with Quartz Composer (included as part of the Developer tools installation on Mac OSX) for almost a year. It’s a great tool for creating screen savers, music visualizations and also for quick prototyping of some visual concepts.
I personally find the patch-based approach to solving problems quite refreshing and offers [...]

In this post I want to talk about some interesting ideas regarding a control called the TokenizingControl. What is that, you may ask? Let's start with the basics. A Tokenizing control takes in some text, delimited by some character, and converts that text to a token, a token that is represented by some UI [...]