
I’ve made a number of updates to the demos. The tutorial demo has been updated to do server-side rendering. This means that it can be used by clients that either don’t support JavaScript or have it turned off. To run:

Visit the URL (typically http://localhost:4567/) and enter a comment. Visit the same URL in a different tab or a different browser and enter another comment. Switch back to the original browser/tab. If you have client side JavaScript disabled, you will need to hit refresh.

The second demo is a calendar. Visit the URL (typically http://localhost:9292/). This will take you to the current month. Left and right arrows will take you to different months (and update the URL). Unlike the tutorial, which is a single file, this application is organized in a manner more consistent with how I expect projects to be organized.

DSL for JavaScript (2015-02-11)


Jeremy Ashkenas: “work towards building a language that is to ES6 as CoffeeScript is to ES5”… close, but—do it for [ES6+HTML+CSS], and you’ll win ;)

It occurs to me that there is a shortcut available. Let a library like React replace [ES6+HTML+CSS]. Then build a DSL for that library.

But there is more. JSX can’t directly express iteration. Look at CommentList from the React tutorial. Instead you build up a list, and then wrap that list. For nested lists, it appears worthwhile to split out separate components. There is nothing wrong with doing that, but I will suggest that the primary reason to split out a component shouldn’t be to pander to the limitations of the programming language syntax.

In Ruby you can directly express iteration. So where a comment box in the tutorial takes four classes, an entire calendar month can be expressed in one.
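To make that concrete, here is a rough sketch in the style of a Ruby2JS React component (the DSL details and names are illustrative, not an excerpt from either demo):

  class CommentList < React
    def render
      _div class: 'commentList' do
        # iteration is expressed directly, in place: no need to build up an
        # intermediate array and then wrap it, and no separate component
        # required just to express the loop
        @@data.each do |comment|
          _Comment author: comment.author do
            _ comment.text
          end
        end
      end
    end
  end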

And there is even more. Functions in JavaScript are the swiss army knives of programming language features. They can be used to express classes, blocks, lambdas, and procs. But this flexibility comes at a price. Ruby2JS can detect when idioms like var self=this are needed and automatically apply them.
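A minimal sketch of the sort of thing I mean (the method and property names here are made up for illustration):

  # Ruby source: the block references instance state, so `this` would be
  # rebound inside the generated callback
  def greet_all(names)
    names.forEach { |name| alert "#{@greeting}, #{name}" }
  end

  # which compiles to roughly:
  #
  #   function greet_all(names) {
  #     var self = this;
  #     names.forEach(function(name) {
  #       alert(self._greeting + ", " + name)
  #     })
  #   }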

The net is that I can write smaller, more understandable code. And in the process focus more on the problem I’m trying to solve.

Like with CoffeeScript, "It’s just JavaScript". The code compiles one-to-one into the equivalent JS, and there is no interpretation at runtime. You can use any existing JavaScript library seamlessly from Ruby2JS (and vice-versa). The compiled output is readable and pretty-printed, will work in every JavaScript runtime, and tends to run as fast or faster than the equivalent handwritten JavaScript.

Now I don’t expect to have the success or impact that CoffeeScript has had. But I can say that I’m having fun. And in the process, I’m seeing the benefits with applications I write.

HTML Imports in trouble as Mozilla doesn’t want to implement; Custom Elements OK even though Chrome is the only implementation?

Overall, Brian mentions four specifications, and crosses off three. Why not all four?

My take is that this talk lumps React in with others based on when it was introduced, but it is as fundamentally different from, say, Angular.js as Angular.js is from jQuery. Compared to the alternatives, React is more imperative, and is based on a virtual DOM. It can also run on both the server and the client.

Brian suggests that you view source on http://brian.io/date-today/. What you don’t see when you do that is today’s date. I’d suggest that the ideal would be a page where you do see today’s date — even if JavaScript is disabled. And for you to be able to interact with that page in ways that involve the server.

I have my own page on which I would suggest that you view source: calendar-demo (Update: that site is down, try this static snapshot). Use the left and right arrow buttons to go to the previous and next months. Viewing source reveals that the page is delivered pre-rendered, and only after the content is delivered are script libraries loaded. Traversing to the next and previous months is pretty snappy despite the fact that there has been no optimization: in particular, there are no anticipatory prefetches. Nor is data retained should you go back to a previous month. Neither of these changes would be hard to implement.
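Retaining data, for instance, could be as simple as something along these lines (purely illustrative; none of these names appear in the demo):

  # cache each month's data the first time it is fetched, so navigating back
  # to a previously viewed month doesn't require another round trip
  def load_month(year, month)
    @cache ||= {}
    key = "#{year}-#{month}"
    @cache[key] ||= fetch_month_data(year, month)  # fetch_month_data is a stand-in
  end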

Source is available in svn. Check it out, do a bundle update to get the dependencies, run rake if you want to run a few tests, and run rackup to start a local server.

I must say that being able to define a component with all of the rendering, client, and server logic in one place is very appealing to me.

Brian suggests authoring source in ES6, and targeting ES5. My preference would be to work towards building a language that is to ES6 as CoffeeScript is to ES5. At the moment, my experimentation along those lines is happening in Ruby2JS.

React Native looks worth watching. Perhaps, as my calendar is using flexbox, I will be able to quickly build an Android or iOS equivalent.

Email addresses (2015-02-02)

I have been telling all non-IBMers not to use my ibm.com email address for years, but this advice is routinely ignored. I’ve repeated the reasons behind why I ask this enough times that it makes sense for me to post the reasons in one place so that I can point to it.

The back story is that 15 years ago I wrote some open source code in a programming language called Java. I don’t use that language much any more, but I understand that it remains popular in some circles. In any case, javadoc style comments encouraged sharing your email address, and my employer discouraged me from doing anything that would hide my relationship with them, so my email address was put out on the web.

The inevitable result is that I’m deluged with spam, most in languages I am not familiar with.

I have control over my personal email, and the spam tools (all open source) that I use are largely effective. I don’t have that option with my corporate email. As others within IBM don’t have this problem, I am clearly an outlier.

Over time, I was missing enough important work-related emails that I taught myself enough LotusScript to write a script that I can invoke as an ‘Action’. This script identifies emails that were sent from outside of Lotus Notes and places them into a separate folder. If I am alerted to the presence of a single email, and given enough information (like the sender’s name and the time it was sent), I can generally find the email; but in general people should assume that emails sent to my corporate email address from outside of IBM are never seen by me.

Another downside of this is that some of my IBM email is sent from service machines that don’t interface directly with Lotus Notes. That means that I miss some important updates. And important reminders. Eventually such reminders copy my manager, who sends them on to me.

Apparently there are plans in the works to migrate corporate email to the “cloud”. Perhaps that will be better. Perhaps I will need to find a way to reimplement my filter or an equivalent. Or perhaps it will be something that I won’t need to worry about any more.

React.rb (2015-01-28)


Having determined that Angular.js is overkill for my blog rewrite, I started looking more closely at React. It occurred to me that I could do better than JSX, so I wrote a Ruby2JS filter. Compare for yourself. Excerpt from the React tutorial:


TL;DR: URL parsers consume URLs and generate URIs. Such URIs are not RFC 3986 compliant. I’d like to fix that.

- - -

Let’s talk a bit about nomenclature.

On the web, particularly in places like values of attributes named href, there are things that people have, at various times, attempted to call web addresses or IRIs. Neither term has stuck. In common usage these are called URLs.

In between the markup and servers, there are user agents. One such user agent is a browser. Browsers don’t passively send URLs along: they reject some outright, and transform others. There should be a name for the set of outputs of the various cleanups that browsers perform.

Since browsers are programmable, you can directly observe this transformation. The WHATWG URL specification defines an API which has already been implemented by Firefox and Chrome, and is being evaluated by Microsoft and Apple. Create a JavaScript console and enter the following:

new URL("hTtP:/EXamPLe.COM/").href

The output you will see is:

"http://example.com/"

The output is clearly much cleaner and more consistent than the input. In fact, in this case the output is RFC 3986 compliant.

Unfortunately, in the general case, this isn’t true. Browsers (and, more generally, other libraries like the ones found in pretty much every modern programming language) can produce things that aren’t RFC 3986 compliant.

I’m looking at every browser and every library I can. I’m specifically looking for differences. In some cases, I’m pointing out where such outputs are clearly wrong and need to be fixed.

In other cases, the output may not be RFC 3986 compliant, but is actually useful and actually works. What this means in practice is that the set of things that consumers need to be able to correctly process is not defined by RFC 3986 but by these tools.

People can learn this the hard way by starting out to implement RFC 3986 and then finding that they need to reverse engineer other tools. We can do better. We can set out to update RFC 3986, or otherwise document the actual set of inputs that consumers can be expected to interoperably process.

In general, I have found that it isn’t difficult to talk about places where RFC 3986 can be tightened up. Where there has been push-back is exploring any notion of loosening the definition. The reaction generally is expressed along the lines of “doing so would break things”.

I can see how some see such a position as reasonable. I don’t, and I’ll tell you why. What is effectively being said is that documenting how things actually work will break things, which is clearly untrue.

What such an effort will do is not break things, but uncover uncomfortable truths. To build upon an example from Dave Cridland, one such uncomfortable truth may be that the sets of things that everybody except LDAP schemas can handle is different than the sets of things LDAP schemas can handle.

There are three ways to handle that. One would be to change everybody to conform to what LDAP can handle. One would be to change LDAP. And one would be to document clearly that the set of things LDAP can handle and the set of things that everybody else expects to be handled are separate sets. Largely overlapping, yes, but not identical sets.

While documenting three sets (the set of things Chrome and other browsers support, the set of things HTTP and other protocols support, and the set of things LDAP supports) would not be my first choice, it may be the only option available given the constraints.

If you look at those three sets, ideally each would be a proper subset of those that precede it. That’s not the case at the moment, but as I mentioned, proposals made with clear rationale provided to tighten up RFC 3986 don’t seem to be getting much push-back.

What we need, then, is three names. URIs seem to be the obvious choice for the name of the set of “things LDAP schemas support”. For better or worse, URLs seem to be the name that has stuck for the first set.

At this point, a number of people, seeing an opening, suggest IRIs as the name for the set in the middle. Um, no. Except for fragments, this set is 100% pure ASCII. The name for what IRIs attempted to define is URLs.

So this means that we need to define a new name. That’s not so bad, is it? It could be worse, at least we don’t have to define a cache invalidation strategy.

URL Work Status (2015-01-17)


The most likely path forward at this point is to get representatives from
browser vendors into a room and go through these results and make
recommendations. This likely will happen in the spring, and in the SF Bay
Area. With that in place, I can work with authors of libraries in popular
programming languages to produce web-compatible versions. This work will take
the form of bug reports, patches, or — when required — authoring new
libraries.

Status by venue:

WHATWG

At the WHATWG, I’m limited only by my own ability to do the work
required. My biggest complaint remains that the barrier to entry to
participate is too high. This, however, is something entirely under my
control to fix for the specifications I’m working on. I’m hopeful that
leading by example will cause others in the WHATWG to do likewise.

WebPlatform

I’ve had some success,
but virtually all of this is attributable to GitHub, not WebPlatform. At the
moment, technical issues prevent me from updating the spec there. These
issues started on December 24th and were promptly reported. If this
continues, I’ll push the webspecs develop branch to a whatwg develop branch
and migrate the
issues.

W3C

There has been no demonstrable progress in the WebApps WG. The TAG seems generally supportive. I
briefed the AB, but nothing has come
of that. Same is true for the
process CG. I’m willing to try proposing a new
working group. Failing this, I believe that I have all the evidence I
need to convince the W3C Director that normative references
to the Living Standard are the only viable alternative. As Sherlock Holmes
was known to say: when you have eliminated the impossible, whatever
remains, however improbable, must be the truth.


I’ve downloaded the multi-part zip archive for IE11 on Win10 for VirtualBox on OS/X from modern.ie. I’ve downloaded the single-file archive on both OS/X and Linux. I’ve verified the md5 signatures for each. Yet each time, when I try to unzip the result, I get the following:

This shows signs of integer overflow, so it seems likely that the problem is client side. Even with that said, choosing to make this content available in a format for which there aren’t working client libraries available to unpack it isn’t helpful.

My original intent was to aggressively prune unnecessary functionality in order to produce a more maintainable result, but with the ability to have automated acceptance tests, this is now less of a concern.

I particularly like the comment that “It just works” was never completely true. My experience is that when working with open source codebases, doing so on a Linux operating system comes much closer to “It just works” than doing so on any other.

Not Rack’s fault, but Sinatra hasn’t released in a while. The problem has been known since July, and a fix was merged into master in August. One possible workaround has been posted. An alternate workaround:


I’ve clearly been neglecting my little spot on the web.

It has gotten so bad that Brendan Eich had to link to a web archive copy of a page of mine. I must say, however, that it is very ironic and amusing that it was that particular page. The problem turned out not to be a software problem, but rather a (presumably inadvertent) DOS attack on feedvalidator.org, causing CGI processes to fail. Blocking the IP address in question caused the problem to clear up.

General outline of my current approach:

My interface to my weblog will no longer be Python/CGI application on a hosted server. Instead it will be a Ruby/Sinatra application on my private home server where keeping things up to date is much easier for me. That application will produce static HTML, CSS, StyleSheet, and a single feed, all of which will be rsync'ed to the public server.

The only services exposed will be search and comments. Comments will initially be disabled, and when they return they will likely be moderated, though I may make the moderation queue publicly visible.

My current focus is a software update. The overall look and feel will (at least initially) remain the same.

The pages produced will be HTML5, though all pages may not always pass validation. Mike is 100% correct: different people can make different judgment calls. In particular, I continue to find that explicitly quoting all attributes and explicitly closing all elements both reduces authoring errors and enables a wider variety of user agents to parse the pages correctly.

I’ll likely drop many features that were popular at one time, but no longer appear to be. An example of this: OpenID.

Along the way, I’ve been named by my employer’s AC member to be a member of
the W3C WebApps Working Group,
and invited to become a member of the WHATWG
organization on GitHub.
I’ve been named as co-editor of the spec in both organizations, and at that
point the fun abruptly stopped. Apparently, the larger political issues that I
had successfully avoided in the past moved front and center.

While I am optimistic that at some point in the future the W3C will
feel comfortable referencing stable and consensus driven specifications
produced by the WHATWG, it is likely that some changes will be required to
one or both organizations for this to occur; meanwhile I encourage the W3C
to continue on the path of standardizing a snapshot version of the WHATWG
URL specification, and for HTML5 to reference the W3C version of the
specification.

Now it is time for me to spell out how I see that happening.

I’ll start out by saying that I continue to want the WebApps WG to follow
through on its charter
obligation to continue to publish updates to the URL Working Draft. And once updates
resume, I want to work on making doing so entirely unnecessary. While this may
sound puzzling, there is a method to my madness. I want to establish an
environment where an open discussion of this matter can be held without anybody
feeling that there are options that are closed to them or that there is a gun
to their head.

Next I’ll state an observable fact: there exist people who value the output
of the W3C process. The
fact that there are people who don’t doesn’t make the first set of people go
away or become any less important. Note that I said the output of the W3C
process. People who value that don’t necessarily (or even generally) want to
observe or participate in the making of the sausage.

What they value instead is regular
releases and making the bleeding edge publicly available. And for
releases, what they care most about are the items that are covered during a
W3C Transition (example).
In particular, they are interested in evidence of wide review, evidence that
issues have been addressed, evidence that there are implementations, and the
IPR commitments that are captured along the way.

Some have argued (and do argue) that these needs can be met in other ways. Not
everybody is convinced of this. I’m not convinced. In particular, the
existence of a bugzilla database with numerous bugs closed as WORKS4ME
without explanation doesn’t satisfy me.

To date, those needs have intentionally not been met by the WHATWG. And
an uneasy arrangement has been created where specs have been republished at
the W3C with additional editors listed, in many cases in name only. Those
copies were then shepherded through the W3C process. Many are not happy
with this process. I personally can live with it, but I’d rather not.

I said that this will require changes by one or both organizations. I
will now say that I expect this to require cooperation and changes by both.
I’ll start by describing the changes I feel are needed by the WHATWG, of
which there are three.

Agree to the production of planned snapshots. And by that I mean
byte-for-byte copies. As a part of this that would mean the
identification of "items at risk" at early stages of the process, and
the potential removal of these items later in the process. These
snapshots will need to meet the needs of the W3C, primarily pubrules,
and only linking to W3C approved references. Even though it should
go without saying, apparently it needs to
be said: those specs need to be snark free. Finally I'll go further
and suggest that those snapshots be hosted by the W3C, much in the way
that the W3C hosts WHATWG's bugzilla database and mailing list
archives.

Participation in the production of Transition
Requests. That would involve providing evidence of wide review and
evidence that issues are addressed. It also could include, but doesn't
necessarily require, direct participation in the transition calls.

Understanding and internalizing the notion that the combination of an
open license coupled with being unwilling or unable to address a
perceived need by others is a valid reason for a fork. Yes, I know that
the W3C hasn't adopted an open license themselves, and I believe that is
wrong too. But that doesn't change the fact that an open license plus
an unmet need is sufficient justification for a fork.

I’ll close my discussion on the WHATWG changes I envision with a statement
that participation in the W3C process (to the extent described by #1 and #2
above) is optional and will likely be done on a spec by spec basis. Editors of
some WHATWG specs may choose not to participate in this process, and that’s
OK; I simply ask that those who don’t participate recognize the implications of
this choice (specifically #3 above).

Responsibility for advancing specs for which the WHATWG editors
voluntarily elect to participate in the process would fall to a sponsoring
W3C Working Group. Starting to sponsor, ceasing to sponsor, and forking a
spec would require explicit W3C Working Group decisions. As a general rule,
Working Groups should only consider sponsoring focused, modular
specifications.

Here’s what sponsoring would (and most importantly, would not)
involve:

No editing. As suggested above, snapshots produced by the WHATWG
would be archived, but these archives would be byte-for-byte copies beyond
the changes involved in archiving itself (example: updating stylesheet
links to point to captured snapshots of stylesheets). The one
possible exception to this would be in the updating of normative
references, but this would only be done with the concurrence of the
WHATWG editors.

That’s it. Of course, the process will remain the same for documents that
are copied and shepherded instead, but I see no reason that the WebApps WG couldn't sponsor the
WHATWG URL standard through this
process, the HTML WG couldn't do the
same for the DOM standard, the I18N WG couldn't do the same
for the Encoding standard,
etc.

While everybody may come into a sponsorship collaboration with the best
intentions, we need to realize that things may not always go as planned.
There may be disagreements. It has been known to happen. When such
occurs:

Everyone involved should work very hard to resolve the dispute as
the consequence of breakage is very bad all around.

If no agreement can be reached, the W3C Working Group will likely
stop the sponsorship of the specific spec involved in the dispute.

If a Working Group stops sponsoring a spec, the Working Group could
still fork that spec - but that would be a suboptimal solution for both W3C and
WHATWG. It would also re-inflame the debates between organizations.

Nonetheless, since each organization has different criteria, we must
recognize that this could happen; especially for large, broad, complex
specs. Accordingly it makes sense for both organizations to continue the
trend towards smaller and more modular specifications.

I have no idea if others are willing to go along with this, but I hope
that this concrete proposal helps anchor this discussion. I invite others
that are inclined to do so to suggest revisions or to create proposals of
their own. As an example, since the above describes an environment of
collaboration and sharing of work, perhaps co-branding may be worth
exploring?

This clearly will take time. As an editor of the URL specification, I’d
like to propose that it be the first test of this proposal. In the
meanwhile, I plan to spend my time coding.

The implementation is incomplete; in particular, much of the character encoding logic and IP address parsing is just roughed in at this point.

I’d like to propose a number of changes to the test results; mostly to more closely match existing browser behavior, and perhaps where possible to make the implementation logic less convoluted. Meanwhile, I felt that it was important to have a faithful baseline implemented so that I could experiment with changes and see if there were any unintended consequences to those changes.

More tests! There’s no such thing as too many tests.

Rewrite URL parser. I suspect that the railroad diagrams (converted to bikeshed?) plus the parts of the grammar contained in curly braces expressed in prose would be more comprehensible and maintainable than the current state machine approach.

I then wrote another script to take this data and pass it through what is advertised as a closely conforming implementation of the relevant RFCs.

Looking at the results, the first set of issues related to the stripping of leading and trailing whitespace, so I updated the script to do that to focus on the remaining differences. Similarly, the URL parsing definition includes the leading ? and # in the query and fragment values respectively, so I eliminated those differences in the cases where the values were non-empty.
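A minimal sketch of the kind of comparison involved, assuming the addressable gem as the closely conforming implementation (the actual scripts differ in detail):

  require 'addressable/uri'  # an implementation that closely tracks RFC 3986/3987

  def rfc_parts(input)
    # browsers strip leading and trailing whitespace, so do the same before parsing
    uri = Addressable::URI.parse(input.strip)

    # the URL parsing definition includes the leading "?" and "#" in non-empty
    # query and fragment values, so add them back before comparing
    query    = uri.query.to_s.empty?    ? uri.query    : "?#{uri.query}"
    fragment = uri.fragment.to_s.empty? ? uri.fragment : "##{uri.fragment}"

    [uri.scheme, uri.host, uri.path, query, fragment]
  end

  p rfc_parts("  hTtP://EXamPLe.COM/a?b#c ")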

Dreamhost upgraded my server to Ubuntu 12.04. I noticed things breaking in preparation for the move, and things that broke after the move. If you see something not working correctly, please let me know.

The URL Mess (2014-09-29)


tl;dr: shipping is a feature; getting the URL feature well-defined should not block HTML5 given the nature of the HTML5 reference to the URL spec.

—

This is a subject desperately in need of an elevator pitch. From my perspective, here are the three top things that need to be understood:

2) The URL spec (from either source, per above it doesn’t matter) is as backwards compatible to rfc3986 + rfc3987 as HTML5 is to HTML4; which is to say that it is not. There are things that are specified by the prior versions of the specs that were never implemented or are broken or don’t reflect current reality as implemented by contemporary web browsers.

3) Some (Roy Fielding in particular) would prefer a more layered approach where an error correcting parsing specification was layered over a data format; much in the way that HTML5 is layered over DOM4.

—

Analysis of points 1, 2, 3 above.

1) What this means is that any choice between WHATWG and W3C specs is non-technical. Furthermore, any choice to wait until either of those reaches an arbitrary maturity level is also non-technical. It doesn’t make any sense to bring any of these discussions back to the HTML WG as these decisions will ultimately be made by W3C Management based on input from the AC.

2) In any case where the URL spec (either one, it matters not) differs from the relevant RFCs, from an HTML point of view the URL specification is the correct one. This may mean that tools other than browsers may parse URIs differently than web browsers do. While clearly unfortunate, this likely will take years, and possibly a decade or more, to resolve.

3) If somebody were willing to do the work that Roy proposes, it could be evaluated; but to date there are quite a few parties that have good ideas in this space but haven’t delivered on them.

—

Background data:

RFC 3986 provides for the ability to register new URI schemes; the WHATWG/W3C URL specification does not. URIs that depend on schemes not defined by the URL specification would therefore not be compatible. Anne has indicated a willingness to incorporate specifications that others may develop for additional schemes, however he has also indicated that his personal interest lies in documenting what web browsers support.

Meanwhile, this is a concrete counter example to the notion of the URL specification being a strict superset of rfc3986 + rfc3987. Producers of URLs that want to be conservative in what they send (in the Postel sense), would be best served to restrict themselves to the as of yet undefined intersection between these sets of specifications.

—

Recommendations:

While I am optimistic that at some point in the future the W3C will feel comfortable referencing stable and consensus driven specifications produced by the WHATWG, it is likely that some changes will be required to one or both organizations for this to occur; meanwhile I encourage the W3C to continue on the path of standardizing a snapshot version of the WHATWG URL specification, and for HTML5 to reference the W3C version of the specification.

Furthermore, there has been talk of holding HTML5 until the W3C URL specification reaches Candidate Recommendation status. I see no basis in the requirements for Normative References for this. HTML5’s dependence on the URL specification is weak, an analysis of the open bugs has been made, and a determination has been made that those changes would not affect HTML5. Furthermore, the value of a “CR” phase for a document which is meant to capture and catch up to implementations is questionable. Finally, waiting any small number of months won’t address the gap between URLs as implemented by web browsers and URIs as specified and used by formats such as RDF.

Should a more suitable (example: architecturally layered) specification become available in the HTML 5.1 time-frame, the HTML WG should evaluate its suitability.


New laptop for work: MBP 15.4/2.6/16GB/1TBFlash. First time I ever went the Apple route. I did so as I figured with those specs, I could run multiple operating systems simultaneously. So far, so good. I’m using VirtualBox to do so.

Notes:

First, Mac OS X 10.9. My biggest problem with previous versions of this operating system is that they always appeared to me to be fairly hostile to installing open source scripting languages and tools. For example, each time I updated my Rails book, I would update the instructions on how to install the necessary software. This now appears to be a thing of the past. In fact, the only problem I’ve encountered so far is with mod_suexec. That problem looks easy to address, and if it isn’t addressed by the team managing the brew recipe, I’ll simply compile the suexec bin myself.

Overall, much improved. This is also my first experience with Apple’s trackpad; and I must say I’m a fan.

Next up, Ubuntu 14.04. Installation was straightforward. One only needs to be mindful to install dkms. Enabling 3D acceleration is also worthwhile, but doesn’t quite get you to native graphics speeds on lesser hardware. The end result is fully functional, though it is worthwhile to do most web browsing on the host operating system.

Then Windows 8.1. This was by far the easiest as Microsoft provides time bombed VMs which you can easily import and use for up to 90 days. When the 90 days are up, you can import again and start over. I’ve now done this with both Ubuntu and Mac hosts.

Finally, Red Hat Enterprise Linux 6.5. There were a few more steps to get this running, and even after doing so the result wasn’t fully functional in that it would not use the full display even after installing guest additions. The solution ended up being to delete (or simply move elsewhere) the following files in the /etc/X11 directory: xorg.conf xorg.conf.d xorg.conf-vm. I use this VM to access the IBM VPN and to run Lotus Notes.

Joe Gregorio: But something else has happened over the past ten years; browsers got better. Their support for standards improved, and now there are evergreen browsers: automatically updating browsers, each version more capable and standards compliant than the last. With newer standards like HTML Imports, Object.observe, Promises, and HTML Templates I think it’s time to rethink the model of JS frameworks. There’s no need to invent yet another way to do something, just use HTML+CSS+JS.

I’m curious as to where Joe believes that these features came from.


There’s actually a gradient of code that starts with a simple snippet of code, such as a Gist, and that moves to larger and larger collections of code, moving up to libraries, and finally frameworks:

gist -> library -> framework

A more complete picture:

gist -> library -> framework -> standard

And even that isn’t complete. Standards are backported using polyfills, and frameworks are updated to use feature detection to make use of standard implementations as they become available.

I’ll also mention a few libraries/frameworks I’m fond of, and how they fit:

Underscore.js. This library implements a number of methods that really should be a part of the language. And in a few cases, are (scan that page for the word native). I’ve been a member of ECMA TC39 off and on for a decade and a half, and based on what I have seen, JavaScript will catch up with Underscore in 30 to 50 years.

jQuery. Their official slogan is “write less, do more”. While that’s true, “make DOM suck less” is equally true. Like with Underscore.js, it predated features like querySelector. One last comment before I move on: “abstract away the platform” is not true for jQuery (nor for any of the libraries/frameworks I’m mentioning here). The key abstraction jQuery provides is a collection of DOM nodes. You can determine the number of elements in the collection by using the length property. You can access individual DOM nodes using indexes: [0], [1], [42], etc.

Bootstrap. While this project contains JavaScript, its true focus is on providing a higher level of CSS constructs than the browsers currently provide. Things like modal dialogs, dropdown menus, tabs, etc. It is worth noting that they do this with “just” HTML+CSS+JS. Sure, you can reinvent these concepts for yourself, but why?

Angular.js. Joe mentioned that he hasn’t needed data binding yet. I’ve written a fair number of small web applications. Some have grown to become bigger and unwieldy. I’ve taken a few of these and started to separate out the client side model, view, and controller, and in the process found data binding to be quite handy. Now I can write larger web applications, and go back and add features months later without being afraid that I am going to break anything.

In each of these cases, I’m confident that the best ideas of these libraries and frameworks will make their way into the web platform. Meanwhile: