I am always on the lookout for new ways of working, particularly to make my learning better. Neal recommends putting together such a radar to help focus how we learn by being more strategic, rather than tactical. That, and it serves as a reminder of the things we might want to look into :) (well for me at least).

So here’s my first go at putting such a list together:

Hold

RequireJs (Tools)

Assess

Gulp (Tools)

Scala (Languages and Frameworks)

Swift (Languages and Frameworks)

ReactJs (Languages and Frameworks)

Living CSS Style Guides (Techniques)

Gradle (Tools)

Play Framework (Languages and Frameworks)

SnapCI (Tools)

ES6 Transpilers (Tools)

Trial

Browserify (Languages and Frameworks)

Functional programming (Techniques)

ES6 (Languages and Frameworks)

Dashing (Languages and Frameworks)

Phantomas (Tools)

Build your own Technology Radar (Techniques)

Docker (Tools)

Programming by Intention (Techniques)

AWS (Platforms)

CircleCi (Tools)

Adopt

Grunt (Tools)

Vagrant (Tools)

Heroku (Platforms)

Linode (Platforms)

CodeShip (Tools)

Git Pull Request Workflow (Techniques)

The next step is to experiment with this visual tool for displaying your own Technology Radar. Let’s revisit this post in 6 months to see how effective this technique was and what has changed in my technology bubble.

RuboCop was an interesting tool, as it helped me write code in a more idiomatic way, i.e. more like a Ruby developer would. Some of the methods I had written used things like get_ or has_. The Ruby way is that you don’t bother with the get_ prefix, so get_something just becomes something, and has_something simply becomes something?.
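To make that concrete, here is an illustrative class (not code from my gem) showing both naming styles side by side:

```ruby
# Illustrative only: idiomatic Ruby drops the get_ prefix on readers
# and turns has_ predicates into methods ending in ?
class User
  def initialize(username, badges)
    @username = username
    @badges   = badges
  end

  # was: get_username
  def username
    @username
  end

  # was: has_badges
  def badges?
    !@badges.empty?
  end
end
```

RuboCop flags the get_/has_ forms via its Naming cops, so running it repeatedly nudges you towards the second style.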

One thing that bit me though was changing " to ' when no string interpolation was happening. Changing this:

spec.files = `git ls-files -z`.split("\x0")

to

spec.files = `git ls-files -z`.split('\x0')

The Gem would no longer build or install, throwing a string contains null byte error (somewhat after the fact). The problem: single quotes around split('\x0'). I need to look into exactly why this was a problem. I hope I am not being unfair to RuboCop, but its primary focus is to help your code follow the Ruby Style guide, not to point out code hot spots.
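My best guess at the cause: in double quotes, "\x0" is an escape sequence producing a single null byte, which is exactly the separator git ls-files -z emits; in single quotes, '\x0' is the literal three characters backslash, x, zero, so the file list never gets split and the real null bytes survive into spec.files, which RubyGems then rejects. A quick sketch you can run in irb:

```ruby
# Double quotes interpret the escape: a one-character null byte string
"\x0".bytes   # => [0]

# Single quotes do not: a literal backslash, x and zero
'\x0'.bytes   # => [92, 120, 48]

# Splitting git's NUL-separated output therefore only works escaped
raw = "lib/a.rb\x0lib/b.rb\x0"
raw.split("\x0")  # => ["lib/a.rb", "lib/b.rb"]
raw.split('\x0')  # => one element, null bytes still inside
```

So RuboCop's prefer-single-quotes advice is sound in general, it just cannot know that this particular string relied on escape interpretation.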

The outcome of refactoring the code was a few more classes. I extracted the exception handling into its own class and created an error code finder. Methods that were manipulating objects to get their work done were updated to have less knowledge and only work on what they needed. Take this method as an example:

This method actually had two problems. For one, it was making duplicate calls (response[accounts]). That could have been fixed by extracting the calls to a variable; however, fixing the underlying problem (the utility function behaviour) would also fix that issue. The method knew too much about the response object and what it contains in order to get its work done. The change is quite simple: extract the knowledge to the calling method:
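The original gists are not embedded here, but the shape of the change was roughly this (class and method names are hypothetical, not the gem’s actual code):

```ruby
AccountLink = Struct.new(:name, :url)

# Before: the method dug into the whole response hash itself
# (and called response['accounts'] twice in the original version)
def build_accounts_from(response)
  (response['accounts'] || {}).map { |name, url| AccountLink.new(name, url) }
end

# After: the caller hands over only the accounts, so the method
# knows nothing about the response's structure
def build_accounts(accounts)
  accounts.map { |name, url| AccountLink.new(name, url) }
end

# call site: the knowledge about the response now lives here
response = { 'accounts' => { 'github' => 'https://github.com/jane' } }
build_accounts(response.fetch('accounts', {}))
```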

Interestingly enough, after extracting my Exception Handler object, I was then warned about a Control Parameter smell. The solution there was to revert the extraction of the helper method for request errors (which raised an exception based on the status code) by inlining it back again.
I found this iterative approach of running RuboCop and Reek really helpful, and it certainly led to cleaner looking code. At least I think so :). Again, do not blindly follow metrics, but use your judgement. In the absence of having someone to review your work, these tools certainly help. Overall an interesting and productive exercise.

James Whittaker is pissed off, really pissed off. We are held hostage by our browser and we should not stand for it.

That’s how he opens his talk on his vision for the future. I stumbled across this again the other week, when I came across this post, where he also goes over some tips for stage presence.

Really quickly, let’s go over these as they are super handy. There are 5 (well really 4.5) bullet points about stage presence worth keeping in mind:

Come out swinging

Attention span interlude

Know your shit

Make it epic

Be brief, be right, be gone

He gives regular sessions on campus about stage presence, and I was able to watch one of the recordings. I thoroughly enjoyed it and recommend watching it if you get the chance.

This post is not about stage presence though, rather his view of the future. It does provide a nice lead in though :). Reading that post reminded me that I saw James Whittaker give a talk in person at our office on this topic, and he certainly came out swinging and kept on swinging for the whole duration. Below is a link to a similar talk he did:

For the most part I enjoyed his talk about his vision of the future. The gist was that we shouldn’t have to go to the web to find (to hunt) the information we are after. We shouldn’t need to context switch. We don’t need apps to do that either (to gather). Our tools should be context aware and fetch the information for us (to farm). His example centered around going to a concert with his daughter after having received an email from her asking him to go with her to see Of Monsters and Men (loved that album, and thanks for putting me on to them ;)).

His tools, in this case Outlook, should be context aware and be smart enough to fetch maps/travel directions/suggestions for restaurants and book the tickets. He bemoans the context switch out of whatever tool you are in to open a browser or an app to complete those tasks. He firmly(?) believes that Microsoft is one of the few companies in a position to deliver on this proposition, based on the tools and services they offer. These tools are ‘Super apps’! Things like Outlook + ‘Bing knows’.

While I agree with the premise, at the time I came away from the talk feeling this was another walled garden in the making. Rather than building open APIs and services, it felt heavily slanted towards being embedded in the Microsoft ecosystem. Despite him mentioning Twitter and Facebook as well (and talking tongue in cheek about ‘this is branding’ when referring to the XBox and Surface), I couldn’t shake that feeling. For what it’s worth I have similar sentiments towards Apple… but they do make lovely products…

I believe that in order for companies (particularly Microsoft) to remain relevant, they need to be more open and allow tools from all sources to build on these systems simply and efficiently. Innovation seems to happen more frequently and rapidly outside of large bureaucratic companies and while they have the resources to deliver, they are slow to do so. Just in case you didn’t realise, I work at Skype, at least for now.

The web was built on openness and I find it quite tragic that more and more we are seeing tiered service provisioning, vendor lock in and data lock in. Yes, yes, companies need to make money, I am not that naive, but there are surely better ways.

I don’t think we should be locked into a world where the only way to achieve this vision is with my Windows Phone (or iPhone for that matter), i.e. one ecosystem. Maybe I am the odd one out: at work I have a Windows machine, my phone is an Android device, my home setup is a Mac+iPad. Building so called ‘Super apps’ for all platforms is a big ask. And therein lies the crux of the matter… All of these devices already have a ‘Super App’ in common: the browser! We have had it for decades! Yes, it was a pain to build for all of the different makes and versions; and while there are still problems, the last 3 or so years have seen an incredible convergence in supported features and functionality.

We might have afforded the browser an ‘incredible’ amount of power; I use it for almost everything. I don’t mind using apps; however my context switch happens when I need to leave the browser to use Outlook, for example. I would argue that we need to invest more into making our ‘web apps’ better (services, UIs, browsers) and turn these web apps into ‘Super apps’ that leverage the Ueber App that is the venerable browser, so that I don’t need to have Outlook or something else open to get what I need. Rather than investing in a walled garden of comfort that is proprietary and closed, it should run on any device, anywhere I am connected, and that is the browser! All hail the Ueber app, home of the super apps.

A few weeks back I decided to add CoderWall badges to the feed on my site. I could have just grabbed an existing gem but I decided to build my own. If you are truly keen you can also find it over at Rubygems.org and add to the other 1,245 downloads :).

To get the ball rolling I followed the steps described over at How I Start. The first stab ended up looking like this:

It simply fetched JSON from the API for a given username and returned a collection of badges. Over the next couple of iterations I reworked a few things and added support for user details and accounts. For testing purposes (and to speed things up) I used Webmock to fake responses from the service. The most interesting thing to solve was how to dynamically assign attr_accessors to the account object. I eventually found that you could do so by using a combination of singleton_class.class_eval and self.instance_variable_set. With the features done, I looked around at other gems and what their READMEs and tool chains looked like.
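A minimal sketch of that dynamic accessor trick (illustrative, not the gem’s exact code): for each key/value pair the API returns, define an accessor on the object’s singleton class and set the backing instance variable.

```ruby
# Define accessors on a single object for whatever keys the API
# happens to return, using the object's singleton class
class Account
  def initialize(attributes)
    attributes.each do |key, value|
      singleton_class.class_eval { attr_accessor key }
      instance_variable_set("@#{key}", value)
    end
  end
end

account = Account.new('github' => 'jane', 'twitter' => '@jane')
account.github   # reader defined at runtime
account.twitter  # likewise
```

Because the accessors live on the singleton class, two Account instances built from different API payloads can expose entirely different methods.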

That just moved the complexity around, but at least the Client object was now a little simpler. So to fix the complexity I extracted methods from send_request, which started off as follows:

I am going to stop here. In a follow up post I will talk specifically about Rubocop and Metric_fu and how they further impacted the design and readability of the code. Before I go though, I wanted to finish up with some thoughts on using Flog and how it changed my code.

I started with one object that did everything and through a series of refactorings I ended up with several smaller more cohesive objects that also followed the Single Responsibility principle more closely (I wasn’t there yet and probably still am not).

I felt that my initial implementation was simple and readable enough. But that’s just the thing, isn’t it? We feel that our code is good enough, but statistics can back these ‘feelings’ up or indeed refute them. I am not saying that one should blindly follow these kinds of metrics and drive our code based off of them, but they are a good source of information and, as this little experiment has shown, can help improve the code. In the absence of being able to pair with someone or have someone else review your code, Flog proved very useful. Overall I am happier with having a class for API calls whose methods are more intention revealing. Likewise with my builder object and its methods; in the next post I will show how I continued on the improvement path for that particular class using Metric_fu (Reek in particular) and Rubocop.

I recently switched over the source control of my blog from Bitbucket to Github, because I wanted to try out a new workflow with regards to editing and publishing posts.

As I tend to create a new git branch for each post I am working on, I wanted to use the pull request approach to publishing posts. I first came across this idea thanks to those wonderful folks over at ThoughtBot. Now granted I am not collaborating with others on posts; however I still find this review process handy. Reading the post in a different context has been beneficial. Furthermore I now tend to give myself a few days between writing and posting as a result of this process. Using Github and their editor I can review, re-read and edit posts at my convenience. So far it’s worked well for me.

This got me thinking though: are there any other improvements I could make to my workflow… Well yes there are. As I mentioned I tend to create a branch for each post, followed by running the new post rake task. I started by modifying the tasks for posts and pages to create a new branch for me using the title. Then I realised I could take it even further, create an initial commit and create a new tracked remote branch. Here’s what the output looks like:
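The output itself isn’t reproduced here, but the title-to-branch step of the modified task can be sketched like this (a hypothetical helper, not my actual rake task):

```ruby
# Hypothetical helper: turn a post title into a git-friendly branch name
def branch_name_for(title)
  title.downcase.gsub(/[^a-z0-9]+/, '-').gsub(/\A-+|-+\z/, '')
end

# Then, inside the new_post rake task, after the post file is written,
# something along these lines creates the branch, the initial commit
# and the tracked remote branch:
#   branch = branch_name_for(title)
#   sh "git checkout -b #{branch}"
#   sh "git add #{filename}"
#   sh %(git commit -m "New post: #{title}")
#   sh "git push --set-upstream origin #{branch}"
```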

The next thing I wanted to improve upon was the publishing step. Commit/Push/Generate and Deploy were the steps I used in the past, a bit long winded and repetitive. Also if I was not at home, then I had to wait to publish an update. Given how I am now using Pull Requests and use Github to sign off on and merge these, why not use CI to build and publish the Blog on merge to master? So I created a new project over at CodeShip, left the test settings empty, but under deployment added:

bundle exec rake generate
bundle exec rake deploy

Now whenever I merge a pull request, CI takes over and publishes my post to my server! Note that if, like me, you use rsync, you will need to add CodeShip’s public key to your authorized_keys in order for Octopress’ rsync publishing to work. This post is the first to feature this new workflow!

Update: it turns out you can complete this workflow using Bitbucket as well!

It’s the Gruntfile for the project in my book. I’ll be honest, the principal reason I want to refactor this file is that it makes book editing quite painful, and it is quite difficult for the reader to make changes. The same can be said for people working with the file: it’s getting difficult to see what is happening in this file, so let’s make this better for all.

Let’s start with the karma tasks. These can be extracted to a file called test.js (I am keeping this generic, just in case I decide to switch testing frameworks at a later stage), and let’s save it under a folder called build:

Apart from removing the code for Karma, I also added the grunt.loadTasks directive pointing it to our newly created build folder. To validate that everything is still ok, just run grunt karma:dev. Let’s do the same for our browserify task; once again create a new file (called browserify.js) and save it under our build folder:

First post of 2015… Yay. It’s that time of the year for reflection and fresh starts. I guess most folks would have done
that last month. I have been feeling quite sheepish about not having made any resolutions for the new year. However, after
having listened to a few podcasts and read a bunch of posts on the re-commenced daily commute about what folks did in 2014,
I feel even more sheepish… So I have pondered what I would like to get out of 2015 and do/achieve.

For starters I would like to be a tad more analytical/critical. I have realised that I just really consume content, without
reflecting on it too deeply or applying it in earnest to see if it works for me. Kind of like consuming an entire box set
of some TV series in one or a couple of sittings, to the point where all the episodes just blend into one. To help me with
that I want to put up at least one blog post a week, and yes I realise it’s the second week of January and I am already
behind… The idea of course is to write more and get better at it, but I don’t just want to share something I learned,
I also want to demonstrate why it’s useful. Or, should I read something that provokes some thoughts, share and discuss
those. Well, that’s the intention anyway…

I actually really, really want to make progress on some of my side projects that have suffered from fits and starts over
the years; right at the top of that list is making good progress on my book.
A short aside for those that are reading it and following its progress: I have started work on the model and testing it.

I have also been tinkering with a few apps/games over the years. For some reason, a few months back I had been reminiscing
about some old computer games from the Spectrum days that I used to play as a kid. One that sticks out was Football Manager.
When you play it now, well let’s just say… Nostalgia… Regardless, I thought it would be fun to try and build a clone,
as I see the potential for some interesting challenges and applications of tools/technologies and services. I believe my
friend Jack termed it over engineered when I told him about all the things I wanted
to try out as part of it.

The site itself also needs some love, it’s been two years, so it is time for a little refresh.

Another one of those quick posts on the state of play of the book project.
First off, while updates have been a little slow of late (summer, holidays, work, etc…), I have been busy-ish planning the
next chapters of the book and hope to push some of these out by the end of the month.

I am also pleased to reveal that over at GitBook, some
130 people have been viewing the book, which is just awesome and roughly 129 more people than I had hoped for! I also see
that one person would be willing to buy the book over at LeanPub.

One thing though is that I have had zero feedback on the book and its content. I have pondered this
for some time now and I have decided to change the book from free to paid. The reasoning being that paying customers might
speak up some more about any issues, or better yet, things that they like! So starting today I am changing the book to paid
on GitBook, starting at $5.00 for the first section.
If you purchase it now you will of course get the updates/fixes and subsequent chapters as they are written. I have also
published a copy over at LeanPub. You can still
get a free version of the book over at my github repository, though reading it that way won’t be
nearly as enjoyable as using Gitbook’s reader or the many eBook options you get with GitBook
or LeanPub. Of course there are always the blog posts of the chapters; however, I do suggest that
if you like the book and its content, maybe buying it is not such a bad idea after all :)

Up until now we have been very much focused on setting up our build pipeline and writing a high level feature test. And while I promised that it was time to write some code, we do have to do a few more setup steps before we can get stuck in. To gain confidence in our code we will be writing JavaScript modules using tests, and we want those tests to run all the time (i.e. with each save). To that end we need to set up some more tasks to run those tests for us and add them to our deployment process. Furthermore, we want these tests to run during our build process.

Setting up our unit test runner using Karma

I have chosen Karma as our unit test runner. If you are new to Karma, I suggest you take a peek at some of the videos on the site. It comes with a variety of plugins and supports basically all of the popular unit test frameworks. As our testing framework we will use Jasmine.

Before going too far, let’s quickly create a few folders in the root of our project. src/js is where we will store all of our JavaScript source code; later on we will create a task to concatenate/minify it and move it to our app folder:

-> tests
   -> unit
-> src
   -> js


As with all tasks, let’s create a new branch:

> git checkout -b test-runner

And then let’s install the package and add it to our package.json file:

> npm install karma --save-dev

Ok time to create our Karma configuration file, typically you would type in the root of your project:

> karma init karma.conf.js

This would guide you through the process of setting up your test runner, here’s how I answered the setup questions:

Which testing framework do you want to use ?
Press tab to list possible options. Enter to move to the next question.
> jasmine
Do you want to use Require.js ?
This will add Require.js plugin.
Press tab to list possible options. Enter to move to the next question.
> no
Do you want to capture any browsers automatically ?
Press tab to list possible options. Enter empty string to move to the next question.
> PhantomJS
>
What is the location of your source and test files ?
You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".
Enter empty string to move to the next question.
> src/js/**/*.js
WARN [init]: There is no file matching this pattern.
> tests/unit/**/*.js
WARN [init]: There is no file matching this pattern.
>
Should any of the files included by the previous patterns be excluded ?
You can use glob patterns, eg. "**/*.swp".
Enter empty string to move to the next question.
>
Do you want Karma to watch all the files and run the tests on change ?
Press tab to list possible options.
> no
Config file generated at "/Users/writer/Projects/github/weatherly/karma.conf.js".

The output is similar, with the difference that our process terminated this time because of the warnings about no files matching our patterns. We’ll fix this issue by writing our very first unit test!

Writing and running our first unit test

In the previous chapter we created a source folder and added a sample module, to confirm our build process for our JavaScript assets worked. Let’s go ahead and create one test file, as well as some of the folder structure for our project:

We could try and run our Karma task again, but this would only result in an error. Because we are using the CommonJS module approach, we would see an error stating that module is not defined, since our module under test uses:

module.exports = TodaysWeather;

In order to fix this we need to run our browserify task before our karma task, so let’s register a new task called unit in our grunt file to handle this:

In the previous part we wrote our first functional test (or feature test, or end to end test) and automated running it using a set of Grunt tasks. Now we will put these tasks to good use and have our Continuous Integration server run the test with each commit to our remote repository. There are two parts to Continuous Delivery: Continuous Integration and Continuous Deployment. These two best practices were well defined in a blog post over at Treehouse; do read the article, but here’s the tl;dr:

Continuous Integration is the practice of testing each change done to your codebase automatically and as early as possible. But this paves the way for the more important process: Continuous Deployment.

Continuous Deployment follows your tests to push your changes to either a staging or production system. This makes sure a version of your code is always accessible.

In this section we’ll focus on Continuous Integration. As always, before starting we’ll create a dedicated branch for our work:

git checkout -b ci

Setting up our Continuous Integration environment using Codeship

In the what you will need section I suggested signing up for a few services. If you haven’t by now created an account with both Github and Codeship, now is the time! Also, if you haven’t already, now is the time to connect your Github account with Codeship. You can do this by looking under your account settings for connected services:

To get started we need to create a new project:

This starts a three step process:

Connect to your source code provider

Choose your repository

Setup test commands

The first step is easy, choose the Github option, then for step two choose the weatherly repository from the list.

If you haven’t already signed up for Github and pushed your changes to it, then the repository won’t show up in the list. Link your local repository and push all changes up before continuing.

Now it’s time for the third step: setting up our test commands. From the drop down labelled Select your technology to prepopulate basic commands choose node.js.

Next we need to tackle the Modify your Setup Commands section. The instructions tell us that it can use the Node.js version specified in our package.json file. Given that we have not added this information previously, let’s go ahead and do that now. If you are unsure of your version of Node.js, simply type:

node --version

In my case the output was 0.10.28, below is my package.json file, look for the block labelled with engines:
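The relevant part of the file looks something like this (the surrounding fields are trimmed down and the version numbers are mine, so use your own node --version output):

```json
{
  "name": "weatherly",
  "engines": {
    "node": "0.10.28"
  }
}
```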

Now let’s edit the Modify your Test Commands section. In the previous chapter we created a set of tasks to run our tests and wrapped them in a grunt command grunt e2e. Let’s add this command to our configuration:

grunt e2e

Let’s hit the big save button. Now we are ready to push some changes to our repository. Luckily we have a configuration change ready to push!

And with that go over to your codeship dashboard and if it all went well, then you should see something like this:

You have to admit that setting this up was a breeze. Now we are ready to configure our Continuous Deployment to Heroku.

Setting up Continuous Deployment to Heroku

Before we configure our CI server to deploy our code to Heroku on a successful build, we’ll need to create a new app through our Heroku dashboard:

And click on the Create a new app link and complete the dialogue box.

The name weatherly was already taken, so I left it blank to get one assigned; if you do this as well, just be sure to make a note of it as we’ll need it shortly. I chose Europe, well, because I live in Europe, so feel free to choose whatever region makes sense to you.

Armed with this information, let’s head back to our project on Codeship and configure our deployment. From the project settings choose the Deployment tab, and from the targets select Heroku. You will need your Heroku app name (see above) and your Heroku API key, which you can find under your account settings in the Heroku dashboard:

We will be deploying from our master branch. Once you are happy with your settings, click on the little green tick icon to save the information. Time to test our set up! We just need to make one little change to our app configuration, which is handy because that will allow us to commit a change and verify the whole process from start to finish. In the previous section we configured our web server to listen on port 3000; Heroku assigns a port dynamically, so we need to account for that by editing our server.js file, adding process.env.PORT to our listen function: