Dan Nordquist on Technology, Work, and Music


Monthly Archives: May 2013

So there’s been, y’know, talk about JavaScript at work. With JavaScript encroaching on the server side more and more, I’ve seen chatter about how unsuitable it is for server systems.

I was one of those guys. I didn’t care if I could do it client-side. Give me a post-back and a code-behind and I’ll get it done. Or: hey, nice client-side implementation of that. You know the next Gecko or WebKit or IE will mangle it, right? Hope they don’t release a new version tonight.

My perspective today is that JavaScript is the language we’ve got. If you hate it, well, there are lots of jobs working on systems and applications that don’t touch web browsers. However, if web browsers are where your audience is, then you’d better get comfortable. I’d bet that most of your beef with JavaScript (if you have beef) is actually with browsers. They’ve been, well, inconsistent. But JavaScript is good (as in “flexible” and “expressive”), and getting better. (And I don’t see a real competitor for it.)

In terms of that encroachment, I think stacks like Node are really intriguing. I think more traditional stacks will definitely catch up, but I think it says a lot that the first group to patch together something like that (a server that was light, event-driven, and I/O intensive) used JavaScript (and V8) to do it.

So to address this, I’ve written this based on what I would have liked to have found as the #1 hit on Google after typing ‘WPF Tutorial’. This article may not be 100% correct, or even do things ‘the one true way’, but it will illustrate the main points that I wish I had found in one place 6 months ago.

Fairly good walkthrough of MVVM and why you would use it. (I had to ramp up yesterday, and spent probably too long trying to sort out the “why”.)

My thoughts:

I very much appreciated that the business objects were chosen so that “Name” would not be one of the business properties. MVVM adds so many new classes, with so many opportunities to misunderstand and poorly name things, that keeping these concerns completely separate is helpful.

The article’s code listings are heavily elided. I missed the bold warning to download the code, and things went much more smoothly once I did. (Generally, though, web articles shouldn’t require that you download code samples.)

I like the idea of “write something quickly, in a naive way, then patch it up with best practices.” More articles should be written from this perspective.

However, it could be clearer when we’re doing something “quick and dirty” with the intention to clean it up later.

I wasn’t familiar with the “xmlns:local” namespace concept, so I spent a half-hour trying to figure that out.
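For anyone else hitting the same wall: xmlns:local is just a XAML namespace declaration that maps a markup prefix onto a CLR namespace in your own assembly, so your own classes become usable in XAML. A hypothetical sketch (MyApp and MainViewModel are made-up names):

```xml
<!-- xmlns:local maps the "local:" prefix to the CLR namespace "MyApp",
     so a class you wrote (here a hypothetical MainViewModel) can be
     instantiated directly in markup. -->
<Window x:Class="MyApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:MyApp">
    <Window.DataContext>
        <local:MainViewModel />
    </Window.DataContext>
</Window>
```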

I really liked the progression from not working, to working, to using a little more of the .NET framework, to growing our own framework. It made a lot of sense.

I still have a sense that MVVM frameworks replace a lot of brittle syntax (which, to me, is unfamiliar and disorienting) with a more stable, extensible platform (which, being new to me, is also unfamiliar and disorienting).

I think good next steps from this article are to find a framework that I can try out a little more deeply. Or, I may revisit knockout.js: with a better understanding of the problems it’s trying to solve, I may have better luck with it.
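As one concrete picture of what these frameworks provide under the hood, here’s a toy observable in plain JavaScript, loosely in the spirit of knockout.js’s ko.observable (and of WPF’s INotifyPropertyChanged). The names and shape are my own sketch, not knockout’s actual implementation:

```javascript
// A toy observable: the view model exposes a value and notifies
// subscribers (think: the view's bindings) whenever it changes.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  const accessor = (...args) => {
    if (args.length === 0) return value;    // read:  name()
    value = args[0];                        // write: name('x')
    subscribers.forEach((fn) => fn(value)); // notify bindings
  };
  accessor.subscribe = (fn) => subscribers.push(fn);
  return accessor;
}

const firstName = observable('Dan');
const seen = [];
firstName.subscribe((v) => seen.push(v)); // a "binding" watching the value
firstName('Daniel');                      // write triggers the notification

console.log(firstName()); // 'Daniel'
console.log(seen);        // ['Daniel']
```

The framework’s job is to wire accessors like this to actual UI elements, which is exactly the brittle glue code you’d otherwise write by hand.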

Good comments tell only what the code cannot express by itself, such as why a particular technique was favored or the dangers of optimizing a block of code. Most other kinds of comments are simply noise, and their presence clutters the code, making it more difficult to understand and creating errors when, inevitably, the comments get out of sync with the code they reference.

Since comments are “good” and not writing comments is “bad”, more means better, right? This is a good look at why we write comments, and why writing more comments than we need will actually make things harder to understand down the line.
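To put that in concrete terms, a small invented JavaScript example: the first comment just restates the code (noise), while the second records a reason the code can’t express on its own.

```javascript
const items = ['a', 'b', 'a', 'c'];

// Noise: "loop over items and remove the 'a' entries" would only
// restate the code below and rot the moment the code changes.

// Useful: iterate backwards so splice() doesn't shift an unvisited
// element into the index we just checked.
for (let i = items.length - 1; i >= 0; i--) {
  if (items[i] === 'a') items.splice(i, 1);
}

console.log(items); // ['b', 'c']
```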

One of the things I used to write about at the last place was RSS. I think it’s an amazing technology. Applied correctly, it can connect you to the things that are important to you in your life, with much less effort than you’re used to spending.

I’ll make this perfectly clear: if you follow more than three websites, like can’t-miss-it must-read websites, you’re wasting an enormous amount of time by not using an RSS reader.

I was an early adopter of a lot of that technology. I’m not sure exactly when I was a Bloglines user, but I became a Google Reader fan pretty immediately after that opened up. (I got so excited about podcasting, one of the most inspiring applications for RSS, that I started a podcast in 2005.) I’ve had my ups and downs with RSS and feeds generally, but I’ve been a pretty steadfast user of Reeder over the past three-or-so years.

Soon, they’ll be shuttering Google Reader. I’m not exactly sure about the reasons, but I imagine the platform doesn’t perform well across paid-search metrics, so it’ll stop working. Any app (like Reeder) that has been tightly coupled to the platform will need to be refactored to deal with different providers.

For my part, now that I’m self-publishing again, I’m trying to embrace the “stream of news” and seeing what comes out of it. For that, I’m really enjoying Feedly. It’s available as a mobile app and, while there isn’t a web interface, per se, it works as a browser extension. I’m using it with Safari and Chrome, and I’m really impressed.

I’m not sure when it became kosher with the browsers, but it’s called a protocol-relative URL. If you make sites that serve pages over both http and https, then you’ve seen the need for this: to avoid a nasty (IE-only) mixed-content security warning, you have to serve up assets whose scheme matches the page’s.

Leaving the scheme off puts the browser in charge of asking for the assets the way that matches. Problem solved.
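For example (the asset URLs here are hypothetical):

```html
<!-- No scheme: the browser reuses the current page's scheme, so an
     https page fetches https://cdn.example.com/... and an http page
     fetches http://cdn.example.com/... -->
<script src="//cdn.example.com/js/app.js"></script>
<img src="//cdn.example.com/images/logo.png" alt="logo">
```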