I love using Git from the command line. I have SourceTree installed, which is a fantastic GUI, but I’ve found that for me personally nothing beats the level of control and responsiveness you get from performing Git operations manually. That said, something that often trips me up is when I start quoting terms in my commit messages, I end up breaking the syntax…


$ git commit -m 'test commit 'hello world''

error: pathspec 'test commit hello' did not match any file(s) known to git.

error: pathspec 'world' did not match any file(s) known to git.

While it’s easy to spot that I’m doing something stupid in the above commit, it’s very easy to let an apostrophe slip into a longer, more natural commit message:


$ git commit -m 'Here is the summary of my commit

And a sentence or two where I describe it's background before I move on and create a bulletted list of change explanations...

error: pathspec 'Here is the summary of my commit

And a sentence or two where I describe its' did not match any file(s) known to git.

error: pathspec 'background' did not match any file(s) known to git.

error: pathspec 'before' did not match any file(s) known to git.

error: pathspec 'I' did not match any file(s) known to git.

error: pathspec 'move' did not match any file(s) known to git.

error: pathspec 'on' did not match any file(s) known to git.

error: pathspec 'and' did not match any file(s) known to git.

error: pathspec 'create' did not match any file(s) known to git.

error: pathspec 'a' did not match any file(s) known to git.

error: pathspec 'bulletted' did not match any file(s) known to git.

error: pathspec 'list' did not match any file(s) known to git.

error: pathspec 'of' did not match any file(s) known to git.

error: pathspec 'change' did not match any file(s) known to git.

error: pathspec 'explanations...' did not match any file(s) known to git.

Easy to do, right? Just the apostrophe in “it’s” is all it takes…

I used to hack around this lazily, at the expense of my commit messages: I would use a backtick instead of quotes, and frequently omit grammatical apostrophes. Yuck! I need to change that, right?!

Immediately I have three options or habits to adopt to prevent me from falling into this trap again:

Get off the command line and use a GUI

Stop writing commit messages inline and use an editor in my shell

Escape my quotes

Option 1: Use a GUI

As I alluded to earlier, this just doesn’t work for me. I admit I do use one from time to time, but since so much of what I do is driven from the terminal, it is my happy space and I’d rather not!

Option 2: Use an editor in terminal

I normally perform commits using the git alias git cm (short for git commit -m) to save keystrokes… If I instead performed a plain git commit (without the -m flag), that would open the vi editor and I could edit my message to my heart’s content before saving and exiting.

In all fairness, this is probably a good option, just a workflow I’m slightly less used to. It feels like more steps and more keypresses (entering INSERT mode in vi, doing the old ESC, :wq sequence).

Yet, if I truly have to craft a message, I’d benefit from the extra editing power available. I should probably give this habit a 30-day trial :D

Option 3: Escape my Quotes

This is the screamingly obvious solution, but one that is less than intuitive to many, myself included.

Mixing Quotes

There is a false-confidence solution, which is to mix your quotes… You can use double quotes for the message and singles for anything you’re quoting inside it, but this fails quickly when you use a dollar sign, since the shell will expect a variable:

e.g.

git cm "This works Mr O'Gregor"

whereas:


$ git cm "This won't because of the dollar sign in scope.$apply the apply part will be interpreted as a variable - and omitted!"

[master fcce7f3] This won't because of the dollar sign in scope. the apply part will be interpreted as a variable - and omitted!

1 file changed, 0 insertions(+), 0 deletions(-)

create mode 100644 myfile.txt

And that’s where it gets dangerous. Given how often I refer to $-prefixed terms in my messages, this is likely to catch me out, and since the commit does actually succeed, I may not even notice!

Yes, I can escape the $, as in git cm "\$moneydollars", but there is a big chance I may not remember to.
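A quick demonstration of the difference, runnable in any POSIX shell (with a variable named apply unset, as it would normally be):

```shell
# Inside double quotes the shell expands $apply away unless the $ is
# escaped with a backslash.
echo "scope.$apply"     # prints: scope.
echo "scope.\$apply"    # prints: scope.$apply
```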

Using Single Quotes

The terminal interpolation of variables makes double quotes dangerous and fortunately using single quotes around the commit message prevents this.


$ git cm 'this will not interpolate: $example'

[master 7580885] this will not interpolate: $example

1 file changed, 0 insertions(+), 0 deletions(-)

create mode 100644 myfile.txt

Great stuff, no PEBKAC error happening, so this is a much better habit to get into, but what about apostrophes?

I’ll still need to escape them but this won’t cut the mustard:


git cm 'It\'s... not going to work'

While simply escaping the offending character is intuitive to most C-style coders, that’s simply not how the shell parses the statement.

The shell will believe my string is still open, and the terminal will offer me a continuation prompt (>).

What will work is if I escape it correctly for the terminal like this:


git cm 'It'\''s... going to work this time!'

Seemingly bizarre to the uninitiated, but try this out in your terminal:


$ echo 'hello'

hello

As expected, but now try this:


$ echo 'he'll'o'

hello

The shell parses the string from the first apostrophe… When it finds the next one, it looks for another apostrophe and concatenates the contents, then it resumes parsing and concatenating until it reaches the final apostrophe, at which point it stops string parsing.
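To make that concatenation explicit: the 'It'\''s idiom is really three adjacent segments (close the string, emit an escaped apostrophe, reopen the string), which the shell joins into one word:

```shell
# 'It' + \' + 's fine' - three adjacent segments glued into a single
# argument before echo ever sees them.
echo 'It'\''s fine'     # prints: It's fine
```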

Please hit me up on the comments if you have a better way, I’ll be very keen to hear it!

Creating an emanating button
http://blog.stvjam.es/2016/11/creating-an-emanating-button/
Sun, 27 Nov 2016
Where I take the simplest part of Monument Valley (the emanating Complete button) and create a basic version with CSS.
Tonight I was re-visiting one of my all time favourite games, Monument Valley.

I found that, just as the first time I played it, I was utterly dumbstruck by the simplicity and sheer elegance of the game. I played my way through the first few levels until I started to get that sudden urge to create something with a beautiful colour palette and subtle gradients, or to code some particle effects… or even mess about with HTML5 audio and some zen-like string sounds.

I was inspired, perhaps not to create, but definitely to imitate… I was inspired to try to take a little slice of that world and translate it into my world: the world of CSS and JavaScript and other technology utterly unsuited to creating a playable Escher painting with bizarre 3D calculations. Yes, I would need to start small… Equally, because I had a pizza in the oven and a movie lined up for once it was done, it would have to be a 15-minuter…

Ok, the button, how about that button that comes up at the poignant conclusion of every level? The roundy one with the emanating ring that is so ensō zen simple but sooo works?!

I wanted to use intrinsics as much as possible and didn’t want to go down the SVG route, so I am using a normal button, dropping a heavy border on it and using border-radius: 50% to turn it into a circle.

The next part I needed was the ring that emanates/ripples outwards. I didn’t want to write some wrapper div and I definitely didn’t want to add one with JavaScript, but fortunately a pseudo element served just fine.

I had to offset its position by the button’s border size (i.e. position: absolute; top: -6px; left: -6px;) and then applied an animation to it that transformed the scale and modified the ring’s border width and opacity at what I thought would be sensible points in a 0–100% sequence.

Finally I had to assign animation-iteration-count: infinite; to keep it rippling away.

All in all, I’m quite happy with the result, and it was fun to grab something I liked and try to imitate it. Next time I’ll try to make something a bit more impressive, but hey, I enjoyed it :D

Migrating to Hexo
http://blog.stvjam.es/2016/11/hello-world/
Fri, 25 Nov 2016
Well it looks like I’m finally pulling the trigger on my old Blogger blog and moving it over to Hexo :D … It’s not you, Blogger, it’s me - I just infinitely prefer writing in markdown and git-based publishing!

UPDATE: That was possibly one of the easiest transitions I could have hoped for, if anyone is interested I followed these steps:

Modified my Blogger template with pseudo-canonical redirects using the technique shown in How to Switch from Blogger to WordPress… While the Blogger-to-WordPress article is great, the necessity of using JavaScript to redirect in lieu of Blogger offering a 301 service is unfortunate. I also refer to the links as pseudo-canonical because, since we are unable to manipulate strings in the Blogger template, the solution revolves around passing a query string param that the new blog can parse and in turn redirect to the correct post. This is not really canonical, as the true canonical URL would be the new blog’s permalink - not some transient URL with a param. I added some JavaScript to modify the <link rel="canonical"> in the Blogger document head to the correct value, but am unsure whether crawlers will pick this link up pre or post JS execution - bearing in mind that Google does execute JS, but I’m not sure exactly at which stage it does this and for what type of scraping.

Here is my best attempt at bending the template to my will… A 301 would be far preferable!

The rest was tweaks and experimentation. I have settled on using the Hexium theme and am just modding it to fit my needs - which so far has proven easy enough. If you’re leaning towards the same theme, check this issue, which may still help you out if you get in a pickle with funny boilerplate messages.

Mocking/Stubbing CommonJS Dependencies with Browserify and Karma
http://blog.stvjam.es/2015/05/mockingstubbing-dependencies-for-unit/
Mon, 18 May 2015
Mocking, stubbing and strategies for dependency injection are often overly complex parts of the JavaScript test code we have to write. But they help us isolate the unit that we want to test. In this article I look at using proxyquire and proxyquireify.
Mocking, stubbing and strategies for dependency injection are often overly complex parts of the JavaScript test code we have to write. But they help us isolate the unit that we want to test. Since CommonJS modules often act as a natural seam for a unit, it makes perfect sense that test frameworks like Jest automatically mock CommonJS dependencies.

While Jest looks absolutely awesome and can do other great things like parallel testing, runAllTimers and promise helpers, Karma and Jasmine are still my weapons of choice, firmly fastened to the old utility belt… Especially now that I’ve found the karma nyan reporter ;)

But I’m not here to talk about karma reporters, as colourful as they can be…

So how do we stub CommonJS Dependencies?

proxyquire is a super easy-to-use proxy for requiring modules that takes an object literal of stubbed dependencies as its second argument.

In the following example we’re going to test a member service, which has a create method and for the purposes of testing we want to isolate it from the ajax service call.

member.js


var ajax = require('./ajax');

var member = {
    create: function(firstName, surname) {
        return ajax.post({
            firstName: firstName,
            surname: surname
        });
    }
};

module.exports = member;

In our test spec, we’ll require member.js via proxy and stub out the ajax service to return a successfully resolved promise containing a member number.

member.spec.js


var proxyquire = require('proxyquire');

// use a helper to return a promise like the ajax post
// method would
var promiseStub = require('promiseStub');

var ajaxStub = {
    post: function() {
        return promiseStub.success('MEM1234');
    }
};

var member = proxyquire('../member', {
    './ajax': ajaxStub
});

describe('using proxyquire', function() {
    var memberNumber;

    beforeEach(function (done) {
        member.create('Peter', 'Rabbit').then(function (data) {
            memberNumber = data;
            done();
        });
    });

    it('substitutes the required module for the stub', function() {
        expect(memberNumber).toBe('MEM1234');
    });
});

It’s worth noting that the path to the stubbed module (ie. './ajax') is relative to the location of member.js, not member.spec.js.

Browserify + proxyquire = proxyquireify

@tlorenz, the smart guy behind proxyquire, also put together proxyquireify for use with Browserify, and the usage is near-identical to the CommonJS examples above, except for a couple of reference changes.

Here’s a walkthrough of the steps to get it working with Browserify and Karma:

Install the proxyquireify node module

npm install proxyquireify --save-dev

Edit your karma.conf.js file

put var proxyquire = require('proxyquireify'); at the top of the file

in the browserify section, add a configure function to add the proxyquire plugin to browserify and set the root folder for the tests.

Using this you can check out the AWSM-1234-a-fantastic-feature branch with either

git lucky 1234

or

git lucky fantastic

Git Find

I’m pretty sure there are variants of this out there already, but this just shortcuts grokking through your branch list on your local repo to find a branch with some text

find = "!f() { git branch | grep -i \"$1\"; }; f"

If I want to list all branches that contain the text ‘chart’, I can do git find chart and get that list.

Personally this scratches an itch for me. I used to use git br | grep -i whatever and then git co, but being able to do it in one step is pretty handy, and in quite a few cases git lucky is all I need, if I know something unique about the branch name.
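For reference, both aliases can be registered from the command line. The find body comes from the post; the lucky body is a hypothetical sketch, since its definition isn’t shown in this excerpt:

```shell
# `git find text`  - list local branches matching text (from the post).
git config --global alias.find '!f() { git branch | grep -i "$1"; }; f'

# `git lucky text` - check out the first matching branch (hypothetical
# reconstruction; strips the leading "* "/spaces from `git branch` output).
git config --global alias.lucky '!f() { git checkout "$(git branch | grep -i "$1" | head -1 | sed "s/^[* ]*//")"; }; f'
```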

A beginner’s guide to starting a new web app with Karma, Jasmine and RequireJS
http://blog.stvjam.es/2014/05/lab-beginners-guide-to-starting-new-web/
Mon, 26 May 2014
Using the mighty Karma as a test runner for non-Angular projects from scratch, highlighting a couple of common errors.
The Karma test runner is a really simple and relatively easy way to run JavaScript across multiple browsers and automatically test code changes (using watchers), and it also integrates neatly into task runners like Grunt, so it can be chained nicely into a Lint > Build > Test > Distribute type of process. Coupled with this, it plays really well with RequireJS…

The only problem is, when things go wrong with misconfiguration you can run into errors like :

There is no timestamp for /base/src/someScript.js

or

Mismatched anonymous define() module: ...

So the first time you’re setting it up, it helps to understand those errors and how to patch them up and also to know the sequence of configs and package installations to get off the ground. If you’re just interested in those errors see Troubleshooting.

In this lab I’ll walk through (in fairly painful detail!) setting up a web app and all the scaffolding required. Each step has been committed to a git repository for reference, so after initial setup you can advance through the lab manually (recommended!) or use git to checkout each step. view steps on github

Step 0 - Prerequisites and Initial Setup

This lab assumes you have npm and git installed and that you are familiar with both.

To get started with the lab, clone the repository and checkout step 0 - basic structure as follows:

In your dev folder, run the following: git clone https://github.com/stephen-james/lab-karma-require-jasmine.git

Change directory to the cloned repo: cd lab-karma-require-jasmine

And check out the starting point of this lab, step 0: git checkout step0

While everyone has their own preferred folder structure for web apps, for the purposes of this lab we’ll be using the following basic structure :


/src     JavaScript source folder for the sample web application created in this lab
/test    the test driver's (Karma) bootstrap for RequireJS and the test specs for this lab

Step 1 - Initialising the Web Application as an npm package

It’s typical in a web app that we’ll have quite a few dependencies, but we don’t want to commit these to our repository. It’s much cleaner to use a package manager that stores a list of dependencies and to exclude them explicitly using a .gitignore file.

For this lab, we’ll be using npm as the package manager. So we should create a .gitignore file to ignore any dependencies it loads in node_modules
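That ignore file needs just one entry for this lab (shown here in a scratch directory for illustration):

```shell
# Keep npm-installed dependencies out of version control.
cd "$(mktemp -d)"
echo "node_modules/" > .gitignore
cat .gitignore          # prints: node_modules/
```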

Next up we’ll create the package.json file for this project using npm. While we’d never want to actually publish this sample app on npm, this will help us describe the project and in future steps will also list dependencies. more information on package.json

You can follow along through the npm init wizard, or alternatively just create the package.json manually.

Using npm init

Run npm init from the root folder of the web app, follow the sample below…

Sample: using npm init from the command line


C:\dev\lab-karma-require-jasmine
>npm init
This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sane defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (lab-karma-require-jasmine)
version: (0.0.0)
description: A step-by-step lab for setting up a simple JavaScript centric web a

We’re referencing RequireJS in the location that npm has installed it for us and telling Require that it should bootstrap the app by running src/main.js

If we were to run the web app now, we’d get a 404 because src/main.js doesn’t exist yet and RequireJS would throw a Script error.

We need to set up our RequireJS bootstrap…

Step 3 - Setting up the RequireJS bootstrap and App entry point

In the /src folder we’re going to create two files, main.js and app.js. main.js will contain the JavaScript module configuration for RequireJS, and app.js will be the real entry point to the web app, which will be fired up once RequireJS has performed its magic.

Install jQuery, it’ll serve as an example dependency for our app

npm install jquery --save

Create src/app.js and src/main.js

In main.js, we’re configuring RequireJS, telling it where to find jQuery and that once it’s configured the modules it should launch our app, by calling app.start()

If we point a browser to index.html we should now see the simple message that the app has started up, coming from app.js

this.target.html("App Started!");

Step 4 - Getting some Karma!

Now that the initial strawman is there for our web app, let’s get that test runner going and start putting in some test specs!

We need to install karma as a development dependency of the application

npm install karma --save-dev

To run the karma client from the command line, we must install it globally

npm install karma-cli -g

Karma requires a karma.conf.js configuration file, which we can write to a file manually or create using the console ‘wizard’ by running karma init.

Using the Karma init console wizard

Run the following from the command line

karma init

selecting the following values :


C:\dev\lab-karma-require-jasmine
>karma init

Which testing framework do you want to use ?
Press tab to list possible options. Enter to move to the next question.
> jasmine

Do you want to use Require.js ?
This will add Require.js plugin.
Press tab to list possible options. Enter to move to the next question.
> yes

Do you want to capture any browsers automatically ?
Press tab to list possible options. Enter empty string to move to the next question.
> PhantomJS
> Chrome
>

What is the location of your source and test files ?
You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".
Enter empty string to move to the next question.
> node_modules/jquery/dist/jquery.js
> src/*.js
> test/**/*.spec.js
WARN [init]: There is no file matching this pattern.
>

Should any of the files included by the previous patterns be excluded ?
You can use glob patterns, eg. "**/*.swp".
Enter empty string to move to the next question.
> src/main.js
>

Do you wanna generate a bootstrap file for RequireJS?
This will generate test-main.js/coffee that configures RequireJS and starts the tests.
> yes

Do you want Karma to watch all the files and run the tests on change ?

Ignore the warnings about non-matching files; provided you got the paths right, that just means those paths don’t contain any test specs yet.

Karma has now created a config file, karma.conf.js, which describes how we want Karma to behave, and an entry point for RequireJS (for Karma, not our app), test-main.js. To keep things clean, move the RequireJS entry point/bootstrap file to the test folder so that our structure now looks like this:


/node_modules
    /jquery
    /karma
    /karma-...
    /...
    /requirejs
/src
    app.js
    main.js
/test
    test-main.js
karma.conf.js
package.json

Edit karma.conf.js to look for test-main.js in the test folder.


files: [
    'test/test-main.js',
    ...
]

Edit test/test-main.js to look like the following

We’re changing a couple of values from the boilerplate template, setting the baseUrl to be /base/src so that dependency definitions are consistent between main.js and test-main.js and adding paths for jQuery and ensuring that the test folder has a relative mapping.

Start Karma up from the command line

karma start

Because we chose both Chrome and PhantomJS as browsers for testing, the test runner will show feedback in both the command line and a launched Chrome instance.

Step 5 - Creating a Spec

Create a subfolder test/app and create spec file startup.spec.js.

Karma’s watcher will pick this spec up automatically and execute it

Troubleshooting

Two errors you can get pretty easily through misconfiguration are :

There is no timestamp for /base/src/someScript.js

Mismatched anonymous define() module: ...

No Timestamp Errors

This indicates that Karma doesn’t have that file in its file list. All files need to be included in this list: the app files, library files/dependencies, and test specs too. If a file is missing from this list, Karma will put up a warning for that particular file.

So in our sample app, if we modified our file list in karma.conf.js from:


files: [
    'test/test-main.js',
    { pattern: 'node_modules/jquery/dist/jquery.js', included: false },
    { pattern: 'src/*.js', included: false },
    { pattern: 'test/**/*.spec.js', included: false }
]

to this (removing the jquery reference):


files: [
    'test/test-main.js',
    { pattern: 'src/*.js', included: false },
    { pattern: 'test/**/*.spec.js', included: false }
]

Karma’s web server won’t serve the file and we’ll get the ‘There is no timestamp’ error output shown above.

Mismatched anonymous define() module

Again this is an issue with karma.conf.js: when we’re using RequireJS with Karma, we must make sure that any files that will be required as dependencies from the test specs are included in the file list with the included: false option. This makes sure the script is not loaded twice (which causes the mismatch).


// list of files / patterns to load in the browser
files: [
    'test/test-main.js',
    { pattern: 'node_modules/jquery/dist/jquery.js', included: false },
    { pattern: 'src/*.js', included: false },
    { pattern: 'test/**/*.spec.js', included: false }
]

If you want to see this in action, you can simulate the problem by switching to included: true on src/*.js using the lab files.

Another thing to ensure is that the application’s RequireJS bootstrap, main.js, is in the exclude setting.


// list of files to exclude
exclude: [
    'src/main.js'
]

Using Jasmine 2.0 with Karma

At the time of writing, Karma installed Jasmine 1.3.1 by default; if you want to use version 2.0 of Jasmine, make sure you specify the karma-jasmine version to be 0.2.0 or higher… more info on Karma with Jasmine 2.0 here

Code Snippet Happiness
http://blog.stvjam.es/2014/02/code-snippet-happiness/
Thu, 06 Feb 2014
The day I first caught a glimpse of snippet satori
When I was a kid, eager to jump onto our XT computer with its massive 20MB hard drive to play text-based RPGs (where @ was the lead character and # was a wall), my mother devised a cunning way to curb the habit. I could play when I wanted, but before I could play I had to do 10 minutes of Typing Tutor.

Fair, but I hated it.

But in hindsight, not a bad thing - I can type faster than I need to and I still had a childhood rich with many hours of hacking around in DOS, playing text games and levelling up my RPG characters.

I’m of the opinion now that being able to type fast, just like being able to talk fast, can at times be detrimental. For stuff to come out right the first time without tumbling over itself, it needs to be thought through and structured. Rushing to express something and simultaneously create structure for optimal delivery often muddies and flusters thoughts - like when you’re pairing and the keyboard operator subconsciously pauses mid-sentence or releases a long “Ummmmm…” while they’re fixing brackets and whitespace.

I was watching a Scott Allen video the other day and was once again impressed by his clear, concise communication and almost effortless live coding skills. He worked the keyboard like a magician, shortcutting ReSharper, using prop and ctor snippets frequently. Code spilled onto his screen with just a few keystrokes and was elegantly refactored with even less.

It suddenly dawned on me that that is what I was missing, I had an epiphany :

Snippets aren’t just for slow folks!

My fast typing had held me back: I had dismissed snippets as useless and more effort than they’re worth, but here was evidence that they clearly were worth it.

I rushed to Tools > Code Snippets Manager and started exploring which ones were out there that I could use. Inspired I made a few more for JavaScript stuff I commonly do and I have a feeling I’ll be making loads more while I build up my efficiency.

Handy JavaScript Code Snippets

Here are a few of the snippets I’ve put together for common JavaScript stuff, like immediately-invoked functions, the Revealing Module Pattern and a jQuery document-ready wrapper function, here.

But creating them is so easy and they can be so easily personalised to fit into how you as an individual dev think, that I’m kicking myself I haven’t used them more extensively before :)

Useful to know

To represent a $ in a code snippet you have to escape it by doubling it ($$), because $ is used to delimit snippet variables.

Logging Custom Objects and Fields with log4net
http://blog.stvjam.es/2014/01/logging-custom-objects-and-fields-with/
Thu, 23 Jan 2014
A detail of my personal experience using log4net’s custom layout patterns and converters.
I’ve used log4net in several projects to help with logging and debugging, but a unique requirement came up where I wanted to log not just human-readable strings but to use the logging capability to store information about the actions users were performing on an MVC4 application.

I wanted to track the user, the controller and the action the user invoked, how long it took, and whether any error occurred. So I created a custom class to represent this, called ActionLoggerInfo, and a custom ActionFilterAttribute, applied as a global filter, which called log4net to log the object.

Quite importantly the log4net log methods (Debug, Info, Warn, …) accept message as object and not as a string, which allows for neat decoupling of serialization to happen via configurable layout patterns and converters.

Overriding the ToString method of my custom ActionLoggerInfo class got me some of the way, because when the default converters were called they would invoke the ToString override and I could serialize the object to JSON. But what I really wanted was for the individual fields of the custom info object to be saved in separate columns of a table, so that I could cobble together some very simple usage reporting for the site. Since I was targeting a SQL environment, having a message string that would have to be parsed for each logged row simply wouldn’t cut it.

I needed a way to map individual fields in the log to columns in an AdoNetAppender.

Out the box, log4net provides a bunch of conversion pattern names that can be used in the log pattern string to shape your log, written in printf style (for example %message). To solve the problem I needed to introduce a new named pattern to represent my custom object and then be able to address individual fields within it.

Adding a named Conversion Pattern

The route I took to do this was to create a custom PatternLayout class in order to introduce the new conversion pattern name %actionInfo for the ActionLoggerInfo class.

You’ll notice that there is no link to ActionLoggerInfo directly in this pattern, it only serves to establish the name "actionInfo" and the converter ActionConverter to be used to perform the conversion.

The Converter

The converter is responsible for handling conversion requests of a LoggingEvent when log4net calls it internally to render a given pattern name to the writer. The converter can then inspect whichever aspect of the LoggingEvent it needs to retrieve the value to write, including the MessageObject that was originally logged.

Since we’re passing the custom object in as the MessageObject, we have everything that we need to inspect the object and pull out the fields that we want to log.

Patterns in log4net follow a %conversionPatternName{option} syntax. Where the conversionPatternName is used to identify which converter to use from the PatternLayout‘s list of converters, and the option in squiggly brackets allows you to pass additional information.

In the case of a custom object this gives us the mechanism we need to specify which field of an object we are wanting to render. eg. %actionInfo{controller}

The option in the %<conversionPatternName>{<option>} pattern syntax is extracted and mapped to the converter’s Option property when the converter is invoked, so it is trivial to inspect it and return the field you want.

Using a bit of Reflection it would be easy to make a generic object converter that could extract the property values of an object based on the option parameter, but for my purposes the switch statement was adequate.
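A converter along those lines could look like this sketch (the Controller and Action properties on ActionLoggerInfo are assumptions for illustration):

```csharp
using System.IO;
using log4net.Core;
using log4net.Layout.Pattern;

// Renders one field of the logged ActionLoggerInfo, chosen by the {option}
// part of the pattern, e.g. %actionInfo{controller}.
public class ActionConverter : PatternLayoutConverter
{
    protected override void Convert(TextWriter writer, LoggingEvent loggingEvent)
    {
        var info = loggingEvent.MessageObject as ActionLoggerInfo;
        if (info == null)
        {
            return;
        }

        switch (Option)
        {
            case "controller":
                writer.Write(info.Controller);
                break;
            case "action":
                writer.Write(info.Action);
                break;
            default:
                // fall back to the ToString override
                writer.Write(info.ToString());
                break;
        }
    }
}
```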

Configuring the AdoNetAppender

We have to modify the commandText of the appender to contain the new database fields and parameters for the custom object’s fields.

Each parameter is then specified explicitly with our custom PatternLayout (in this example MyApp.ActionLayoutPattern) and with a conversionPattern that uses the custom conversion pattern name (in this example %actionInfo) and the field {controller}.
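Pulled together, the relevant part of the appender configuration might look something like this fragment (table, column and assembly names are made up; an assembly-qualified type name may be needed for the custom layout):

```xml
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <!-- commandText extended with a column and parameter for the custom field -->
  <commandText value="INSERT INTO ActionLog ([Date],[Controller]) VALUES (@log_date, @controller)" />
  <parameter>
    <parameterName value="@controller" />
    <dbType value="String" />
    <size value="255" />
    <!-- our custom PatternLayout, using the custom conversion pattern name -->
    <layout type="MyApp.ActionLayoutPattern">
      <conversionPattern value="%actionInfo{controller}" />
    </layout>
  </parameter>
</appender>
```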

Adding forgotten files to previous commits with Git
http://blog.stvjam.es/2013/09/adding-forgotten-files-to-previous/
Wed, 11 Sep 2013 23:00:00 GMT
We all do it: in the heat of the moment we stage badly. git commit --amend -C <SHA> to the rescue!
It's incredibly easy to forget to stage all your files for a given commit. Whether it's an untracked file that you only realise should be tracked a few commits down the road, or you're divvying up changed files to stage and commit them logically and in doing so leave a file out accidentally, it's something that is bound to happen sooner or later.

In my team we've decided that we're going to strive for more granular commits to make our repository version history more meaningful; as a result, the above has been happening to me a lot more often than usual!

In the past I would just create a new commit and include the file with some sort of apologetic comment, but thinking of the bigger picture it makes sense to maintain a brighter, cleaner repo where contributors can theoretically check out any revision and have it be exactly what it says on the tin (i.e. the commit message and description).

Adding an unstaged file to a previous commit

So you’ve run git status and found out that you have an unstaged file that should be added to a previous commit.
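Here's a minimal sketch of the common case, where the forgotten file belongs in the most recent commit (the throwaway repo and file names are made up for illustration):

```shell
# set up a throwaway repo to demonstrate with
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name You

echo 'core code' > feature.txt
git add feature.txt
git commit -q -m 'Add feature'

# oops -- config.txt should have been part of that commit
echo 'settings' > config.txt
git add config.txt
git commit --amend -C HEAD   # fold it in, reusing the existing message

git show --stat --oneline HEAD
```

For a commit further back in history you would typically mark it for `edit` in an interactive rebase (git rebase -i), stage the file, run git commit --amend, then git rebase --continue.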

When you're happy that everything is well staged and cleanly committed, push to the upstream repo: git push

And that's a really simple, easy way to do it. Of course, if you want to test each revision and the staged changes (which is always a good idea before pushing upstream), you'd want to check out that revision first and run your tests.

Amend and its parameters

--amend folds the staged files into the commit being amended, and optionally lets you modify the commit message.

If you want to pop in a completely new message,

git commit --amend -m "new commit message"

or if you want to leave it untouched, use

git commit --amend -C <revision hash>

(as we did in the example above)

or if you want to edit the message on the fly in your terminal/console, just leave out the -m and -C arguments and git will open the message in your configured editor (vi by default on many systems) for direct editing.

So to start, let's throw out any idea of trying to center elements using margin: -Npx and variants thereof, and look at three ideas that work regardless of element heights.

Method 1 - Display as a Table Cell

If you grew up in the bad old days of table layouts, you will be very familiar with the fact that table cells have a useful ability to center inline elements vertically, by setting the vertical-align property.

Of course, now we're older and wiser and despise tables for displaying anything that isn't tabular, but we can make a div behave like a table cell and vertically align stuff using display: table-cell.
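A minimal sketch of the idea (the class name is made up):

```css
/* The wrapper behaves like a table cell, so vertical-align applies
   to its inline content regardless of the content's height. */
.v-center {
  display: table-cell;
  vertical-align: middle;
  height: 200px; /* any height works */
}
```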

Naturally this is loads cleaner than the extra table tags, but it still doesn't sit 100% well with me. It isn't really a table cell: it doesn't sit in a table, and the only reason I am styling it as such is to take advantage of one behaviour, vertical alignment. Still, it is a method that works consistently across a wide range of browsers (old and new).

But what about a purer way? How about the purpose-built CSS Flexible Box Layout Module, which exposes CSS properties for alignment and justification?

The only trouble is deciding which version to use, due to irregular browser support.

Method 2 - Flexible Box Layout Module (2009 Draft)

Here we only use the bits we really need from the flexible box module. In other words: we opt into the flexible box module with display: box, we want a horizontal layout (box-orient: horizontal), and we want everything aligned centrally (box-align: center). With browser-specific prefixes this becomes a bit more verbose, but you get the idea.
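As a sketch, with the vendor prefixes written out (class name made up):

```css
/* 2009 draft flexbox: horizontal box with children centered on the cross axis */
.v-center-2009 {
  display: -webkit-box;
  display: -moz-box;
  display: box;
  -webkit-box-orient: horizontal;
  -moz-box-orient: horizontal;
  box-orient: horizontal;
  -webkit-box-align: center;
  -moz-box-align: center;
  box-align: center;
}
```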

For uses of the draft beyond just vertical alignment, take a look at the W3C 2009 Draft

It's worth noting that you can use this version of the module with legacy browsers by including the flexie.js shim.

Method 3 - Flexible Box Layout Module (CSS3)

Probably the best description of the CSS3 flexible box module can be found in the MDN documentation. Worth noting is the comment at the bottom of the document : Firefox supports only single-line flexbox. To activate flexbox support, the user has to change the about:config preference “layout.css.flexbox.enabled” to true.

Our implementation is similar to the Flexbox Draft solution in Method 2 except for property names which have changed with the specification… So again, we’re saying “hey, using the box layout module, set this to row/horizontal and centrally align it”.
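The CSS3 version of the same sketch (class name made up; at the time of writing some browsers still wanted a -webkit- prefixed display value):

```css
/* CSS3 flexbox: a row whose children are centered on the cross axis */
.v-center-flex {
  display: -webkit-flex;
  display: flex;
  flex-direction: row;
  align-items: center;
}
```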

At the time of writing, this worked only in Chrome and Firefox (with the about:config preference change mentioned above, which I doubt our users will have made).

Wrap Up

Which method you choose should really be all about your target browser audience. If you need to reach everyone with the minimum performance impact (i.e. without including shims etc.), the old table-cell hack in Method 1 may actually be the best. If you're in the rare position of working for a future audience who use modern browsers, Methods 2 & 3 are where it's at. To cover more bases, you could use Modernizr, create CSS rules that work with the "flexbox" class it will pop on all browsers that support it, and use Method 1 or Methods 2 & 3 accordingly.

To do this I created a few JavaScript classes to support a simple tree data structure, DataStructures.Tree. You can find the source, an example using d3 to illustrate the hierarchy, and tests on GitHub.

Tree has a root node, the ability to find a node in itself (given an optional matching function) and, importantly, a factory method that can create a tree from a flat self-referencing table. DataStructures.TreeNode has a children reference, which is instantiated as a DataStructures.TreeNodeCollection.

To simplify traversal I made Tree able to iterate through its nodes using a depth-first, non-recursive approach to finding the next node (keeping the nodes still to visit in an explicit collection rather than on the call stack).
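As a rough sketch of that kind of non-recursive traversal (the node shape and function names here are illustrative, not the library's actual API; note that an explicit stack yields depth-first order, while a queue would yield breadth-first):

```javascript
// Illustrative node shape: { value: ..., children: [...] }
function traverse(root, visit) {
  var stack = [root]; // nodes still to visit
  while (stack.length > 0) {
    var node = stack.pop();
    visit(node);
    // push children in reverse so they are visited left-to-right
    for (var i = node.children.length - 1; i >= 0; i--) {
      stack.push(node.children[i]);
    }
  }
}

var tree = {
  value: 'root',
  children: [
    { value: 'a', children: [{ value: 'a1', children: [] }] },
    { value: 'b', children: [] }
  ]
};

var visited = [];
traverse(tree, function (node) { visited.push(node.value); });
// depth-first order: root, a, a1, b
```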

Once I had the initial structure in place, the next task was to convert from this Tree structure and its attributes (which had a lot to do with traversal and relationships) back to a simple, leaner JSON object. So I added a toSimpleObject method, which also allows a decorator function to be passed in to do bespoke formatting/manipulation.

This was particularly useful for the d3 case that I wanted to use this with (see example in index.html), where I needed to add size values to leaf nodes and remove empty children attributes to indicate which nodes are actually leaves.

There are quite a few potential improvements I've already identified: when building the tree from CSV I assume that parent nodes already exist by the time I come across a child row, and the algorithm could and should be optimized. But for now, I'm getting what I want from it!

If you're interested, there's a set of unit tests written with QUnit to illustrate how it all fits together, in the repo under /js/lib/tests.

Circlesque Iconset
http://blog.stvjam.es/2013/03/circlesque-iconset/
Tue, 05 Mar 2013 00:00:00 GMT
A basic iconset I created for the PMSI pricing project. Available in .svg and .xar. Requires Kai Bold font for the translation icon.

Automating Rasterization of HTML elements on a Page
http://blog.stvjam.es/2013/01/automating-rasterization-of-html/
Wed, 16 Jan 2013 00:00:00 GMT
PhantomJS is a headless Webkit browser. In other words, it’s all the awesome of Safari/Chrome without the UI and is capable of injecting Javascript into a loaded page, evaluating expressions within the browser sandbox and screen capture.

It's that last little bit, screen capture, that I'm really interested in, so I headed along and found this simple rasterize.js script (provided by the creator of PhantomJS), which is good for taking full screenshots of a web page, but not so good if you want to take a screenshot of only a particular element.

So I wrote a script called rasterizeElement.js which takes advantage of the webpage object’s clipRect property and sets the clipping rectangle according to the boundaries of the element you are selecting.
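The core of the idea looks something like the following sketch against the PhantomJS API (the URL, selector and output name are made up; the real rasterizeElement.js also handles command-line arguments and page sizing). Note this runs under phantomjs, not plain Node.js:

```javascript
// run with: phantomjs capture.js
var page = require('webpage').create();

page.open('http://example.com/', function (status) {
  // measure the target element inside the page sandbox
  var rect = page.evaluate(function (selector) {
    var r = document.querySelector(selector).getBoundingClientRect();
    return { top: r.top, left: r.left, width: r.width, height: r.height };
  }, '#main');

  // clip rendering to the element's bounding box
  page.clipRect = rect;

  page.render('element.png');
  phantom.exit();
});
```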

You can see the usage guidelines just by running the script without arguments: phantomjs rasterizeElement.js

The selector parameter, renderElementBySelector, can be any valid CSS selector, because under the hood it gets passed to the document.querySelector() method to get a reference to the element in the DOM.

Bits and Bobs Under the Hood

If you are going to modify or want to work with the rasterizeElement script, here are some things that are worth a mention…

I was unable to get a realistic clipping rectangle for the selected element until I used the page.evaluate function. This was down to my PhantomJS ignorance; it makes sense that evaluation of the webpage happens in this controlled sandbox.

If you want to use the optional arguments viewPortsize and paperSize, you will have to escape your object literals in the command line.

Refer to the original rasterize.js to see acceptable paper sizes if you want a non-A4 PDF.

Ping me on Twitter (@stephenhjames) if you have any suggestions or modifications to this script, or fork the gist on github and drop a comment