Tuesday, December 28, 2010

This commit to add registration of jQuery as a module via the Asynchronous Module Definition (AMD) API has brought up some questions that I should have addressed before the commit happened. The jQuery team may decide to back out the commit, as is their prerogative, but I hope not. This post will be the justification for the AMD API in general and for its suitability for jQuery in particular.

What is the Standard?

There was a concern brought up by Kevin Smith that module loading is still up in the air, and that there is no "Standard" yet. To me a standard is something that is written down, understood by people, and has a good level of market adoption. Multiple implementations help too.

CommonJS Modules is a standard in that way. It is written down, understood by a few people, and has a good level of market adoption via Node. There are a few implementations for other JS engines.

The CommonJS group also has a process where they have voted before to call it a Standard vs. a Proposal. Anyone can join the CommonJS discussion list, and I think it is still an ongoing discussion on how best to vote on proposals, who should get to vote, and if voting makes any sense. It is helpful to know if most of the people on the list like a particular proposal though, since it gives some confidence that the proposal has been thought out.

I have a concern about the diversity of the list participants though. Most joined when it was still ServerJS, and the CommonJS modules "standard" does not work well for use in web browsers that use script src="" to load scripts. So, you could hear that "CommonJS Modules are a standard", but it does not mean it works well in all CommonJS environments, most notably, the largest distribution of a JS environment, the browser.

That is the reason the AMD proposal exists. It works well in the browser environment where module loading is by default asynchronous, and with script src="" loading (the easiest to debug and fastest way to load code), there is no control over modifying the script source before it executes. AMD is called a "proposal" in the CommonJS list because people on the CommonJS list have not voted to call it a "standard".

There is recognition on the CommonJS list that the CommonJS Modules 1.1 is not sufficient for script src="" browser loading, but there is some disagreement over how to solve it. AMD is one proposal. Kevin Smith has put together a Module Wrappings proposal, based on some discussions on the CommonJS list. The Wrappings proposal is incomplete -- there is more to it based on the list discussions. I will highlight the main difference between AMD and Wrappings below, but first, another module API contender:

ECMAScript Simple Modules. This is a "strawman", which is like a proposal in the CommonJS group: it is still something being worked on. This one is different because it is being worked on by the ECMAScript committee, and it seems like the strongest module proposal in that committee so far. If adopted, some future version of the ECMAScript (JavaScript) standard will have support for modules natively. So, any module API used today should be aware of what is going on in the ECMAScript arena.

So, for talking about a module standard, particularly for use in the browser, these are the main contenders:

AMD

Wrappings

Simple Modules

There are other module systems, like the one used in Dojo and the one used by YUI, but those are too tied to those existing toolkits to be broadly adopted. Dojo has started to move to AMD.

AMD and Wrappings Differences

AMD and Module Wrappings are mostly the same. There needs to be a function wrapping around the actual module, to make sure the dependencies are fetched first before executing the module. There is some bikeshedding around names that I do not think is important (define vs. module.declare).

The main difference: AMD executes the module functions for dependencies before executing the current module function, whereas Wrappings just makes sure the module function is available, but does not execute it until the first require("") call inside the module function asks for it.

For shorthand, I will call the AMD approach the "execution" approach, and the Wrappings approach "availability". Both relate to how dependencies are handled.

Because AMD executes the module functions of dependencies before calling the function wrapping of the current module, it can pass the dependency as an argument to the factory function. This is a side benefit of execution.
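To make the "execution" approach concrete, here is a much-simplified sketch of it. This is a stand-in for a real AMD loader like RequireJS, with made-up module names, not the loader's actual internals: each dependency's factory runs first, and its exported value is passed to the dependent module's factory as an argument.

```javascript
//Toy module registry standing in for a real loader.
var registry = {};

function define(name, deps, factory) {
    //Assumes dependencies were defined earlier; a real loader
    //would fetch and order them for you.
    var args = deps.map(function (dep) { return registry[dep]; });
    registry[name] = factory.apply(null, args);
}

define('add', [], function () {
    return function (a, b) { return a + b; };
});

define('math', ['add'], function (add) {
    //"add" arrives already executed, ready to call.
    return { double: function (x) { return add(x, x); } };
});
```

In the "availability" model, by contrast, the factory would receive nothing up front; it would call require('add') itself, and only at that point would the dependency's code run.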

The "availability" approach was used for Wrappings to keep it most similar to how CommonJS modules deal with dependencies, but I believe it is the wrong choice for the future.

Problems with "availability" model

"Availability" is not supported in the ECMAScript Simple Modules strawman. Simple Modules uses a model that is closest to the "execution" model: you use "module" and "import" keywords to reference dependencies, but they cannot be used like require("") is used in the "availability" model. "module" and "import" are more like directives, and not something that can by dynamically assigned, as part of conditionals. Examples:

In the process of working on RequireJS and trying to translate modules written for Node to a wrapped format, I have seen the following constructs in Node modules (which assume the "availability" model):

Those work in the "availability" model, but not in Simple Modules. AMD's execution model does not allow the above constructs, so modules written for AMD will be more portable to Simple Modules.

2) The "execution" model fits better for projects that use libraries like jQuery, Prototype or MooTools, where many of the modules augment other objects, and they are assumed to have already run before executing the current module function. jQuery plugins augment the jQuery object, Prototype and MooTools augment JavaScript object prototypes.

Choose AMD

AMD is the API available today that translates best to the future, and it is the one jQuery should support, because:

There is a document that defines the API.

It is understood by a few people, and discussed in public on the CommonJS list.

It is not ratified as a "Standard" in CommonJS because there are list participants that still want to hold on to CommonJS "availability" semantics. However, that availability model is not as future proof as AMD, at least for the Simple Modules future.

AMD translates better to Simple Modules due to the execution model. It maps better to the related Module Loaders strawman, so if Simple Modules with the Module Loaders API becomes a standard and gets implemented in browsers, code written for AMD is more likely to continue to work.

For browser versions that will probably never support Simple Modules (IE6-8, even 9?), code continues to work.

The "execution" approach for dependencies fits better with the mental model of existing browser toolkits that want to augment other objects, like jQuery plugins.

AMD is supported by a module loader (RequireJS) that has had some level of successful adoption, and it is fairly robust now, with over a year in development with frequent releases. RequireJS has an adapter that allows the same modules to be run in Node and Rhino. The Node adapter allows using existing Node modules.

RequireJS in particular works hard to make sure it works well with jQuery, including integration with jQuery's readyWait to hold off DOM ready callbacks until all scripts are loaded.

AMD has more real world adoption than Wrappings. The Dojo Toolkit now supports using an AMD-compliant loader in their trunk code for Dojo Core and Dijit. There is a thriving RequireJS list with real people using it. BBC iPlayer uses it. The RequireJS implementation of the AMD API has been promoted as a useful tool in multiple jQuery-related conferences.

Stronger: The refactored core is more robust than the 0.1x releases, and the new plugin API makes it much easier to construct loader plugins. The plugin API is an implementation of Kris Zyp's plugin API proposal. The optimizer/build support is different than what he proposed, and I plan on discussing the changes on the CommonJS list where he sent the proposal. But the basic API for defining a plugin in that proposal is supported.

The Rhino and Node adapters are even better in this release. I want to improve the Node adapter in a future release by allowing the use of npm installed modules without setting up a package path config.

Faster: The refactored core responds faster to loaded modules, and it will execute a module as soon as its dependencies load. In practice you will probably not notice an appreciable speed change, but it allows the new plugin API to work well.

Prettier: The web site got a much needed facelift and a logo, thanks to the talented Andy Chung. So much better than my feeble attempt.

As part of the refactoring, I removed the Transport D files. I believe they are no longer in use by other projects. I also removed require.modify. It could not support relative path names, and was always an odd thing that did not quite fit in with the rest of the code.

The refactored code is also more strict. If you do this in main.js:

require(['foo'], function () {});

And then in foo.js, it does:

require(['bar'], function () {});

The require callback in main.js will be called before the require callback in foo.js, because foo.js did not define a module. The loader thinks it just needs to execute foo.js in order to call the require callback in main.js. To make sure the require callback in main.js waits for foo's factory function to get called, change the require call in foo.js to define:

//In foo.js:
define(['bar'], function () {});

Thanks to the community for filing bugs and contributing to the discussions on the mailing list. Thanks to Ben Hockey for drilling in deep and working out how to load the Dojo trunk code with RequireJS. It prompted some great bug fixes, and a few patches from Ben.

What's next? I want to move closer to a 1.0 for RequireJS, but I would like to get these things settled first:

Get feedback from the community on the loader plugin API. I need to do some better documentation for it too.

I want to do a has.js plugin generator, and I want to integrate support in the optimizer to trim if(has['test']){} if the configuration passed to the optimizer indicates that has['test'] will always be true.

Allow the optimizer to run on top of Node. Right now the optimizer runs on top of Java via Rhino, but that is increasingly becoming a liability.

Sort out full packages support. Right now RequireJS can load modules in packages, but it cannot handle two versions of the same package in a project. While this is a minority use case, I would like to have a story for it. I think it is workable in source form, but I am not sure how it will work out yet for optimized scripts that want to include modules from two different versions of the same package.
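For context on the has.js item above, here is a rough sketch of the pattern (simplified and eager, where has.js itself is lazier; see has.js for the real API): feature tests are registered once, then queried by name, and an optimizer that knows a test's answer can strip the dead branch at build time.

```javascript
//Registered feature tests, queried via has('name'):
var tests = {};
function has(name) { return !!tests[name]; }
has.add = function (name, test) { tests[name] = test(); };

has.add('array-map', function () {
    return typeof Array.prototype.map === 'function';
});

//If the optimizer is configured to treat has('array-map') as
//always true, it can remove the test and keep only this branch:
if (has('array-map')) {
    //modern code path
}
```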

Sunday, November 14, 2010

The jQuery sample project now includes the ability to use RequireJS plugins.

The jsonp! plugin has been removed, since, thanks to work by Kris Zyp, the core loader now supports loading JSONP dependencies by default. The JSONP docs have been updated accordingly.

The optimizer can now be run from any directory, not just the directory with the build profile.

The r.js Node adapter is more robust, and it can handle more Node-written modules by default now. Thanks to Francois Laberge for a great test case application that led to improving the robustness of r.js.

Initial support for PS3 Netfront browser. Thanks to Chris Warren for investigating the load behavior of the browser. Not all tests pass, but the basic ones do.

The next stage of development will likely be a refactoring of the basic code. The code has grown organically along with some changes in requirements over the course of the year, and it would be good to reorganize based on the current understanding of script loading needs. The biggest noticeable change should be a better API for writing loader plugins, but the core API for requiring and defining modules should not change.

Friday, November 12, 2010

I am a front end developer on Mozilla F1, a sharing extension in Firefox, and this is some technical background on it. Note that any opinions in here are from my personal perspective, you should not treat this as official Mozilla policy, etc...

F1 consists of a few components:

1) Firefox browser extension
2) HTML/JS/CSS share UI served from the F1 web server
3) API server written in Python that lives on the F1 server

The extension (#1) is responsible for the share button in the toolbar, for injecting navigator.mozilla.labs.share(), and for managing a browser instance that is inserted above the main web browser area. This browser area makes a call to the F1 server to serve up the main share UI (#2).

The share UI is written in pure HTML/CSS/JS. They are static files that use RequireJS to build modular JavaScript which is then optimized via the RequireJS optimizer into a single application-level file. It does not yet use appcache, but it is something I want to add. It should be straightforward. Making sure we properly version files with long cache times will also help.

The share UI uses the API server (#3) to do the actual sharing. The API server also manages the complications of the OAuth dance.

If you are curious about the implementation, all the code is open. You can download it/fork it at the Mozilla Labs F1 GitHub repository. There is a README.md file with setup instructions.

The great thing about this? You can run your own share server that you control. If you want the F1 browser extension to use your server for the share UI and API server, set the following F1 configuration in the about:config Firefox preferences:

Ideally that would be an https URL, but if you are setting up your own server, hopefully you know the risks involved. Be sure to restart the browser so the extension will use this new config value.

If you think this is really cool and want to help contribute code/ideas/submit pull requests, be aware that we are still in the early stages of F1. We expect to make many changes as we refine it. Mozilla is also new to GitHub, so we appreciate your patience as we try out different work flows.

I will close out this post with a couple developer bikeshed questions and answers. Again, these are my personal perspectives, and other people on the team may have different ones.

Why is the share UI served from a web server and not from the extension?

This is an experiment. More things are moving to the cloud/server, and some of the auth APIs, like OAuth, are better suited to server use. While Mozilla already runs some non-trivial server-based services, it is good to get experience with varying types of server-based services.

I like the flexibility and ease of updates a server solution provides. Also, it is easier to debug web code that is served from content space vs. browser chrome space.

It may allow us to support mobile and even other browsers more easily. I played around with doing a Google Chrome extension that served up the share UI. Unfortunately, the Google Chrome extension model is not as flexible as what we can do in Firefox, so it does not really work.

In particular, I was looking at a Chrome extension Browser Action that served up the UI in an iframe in a Popup, but I could not get the popup wide enough to show the UI, and it closes as soon as an OAuth window jump occurs. We would have to do some rethinking on UI to support that, and it is not clear how beneficial that is at this early development stage.

All that said, the UI could be burned in to the extension at some point. It is still too early to tell where this will go.

Why did you use Python for the server?

Ease of setup, combined with some knowledge of real services built with it, and the mix of developer skills of the F1 team.

Ideally I would like to see the server running JavaScript via Node, but that is because I am an irrational JavaScript zealot. However, the JS libraries for Node are still young, and at the end of the day we want to solve real user problems, not debug other people's code.

That could change in the future. But for now, it is more important to get something up quickly that works well so we can test our designer's usability theories for solving real user problems.

As the latest browser engines (WebKit, IE9 and Firefox 4) implement HTML5, there is wording in the HTML5 spec that breaks ordered execution of dynamic script elements.

What does this mean in practice? The RequireJS order! plugin and a core feature of LABjs break in the latest browsers under development. More info:

When a script element is created dynamically (via document.createElement('script')) and appended to the DOM, the behavior in current browsers differs: IE and WebKit will execute the script as soon as it is delivered from the network, while Opera and Firefox will download the scripts as fast as they can, but execute them in the order they were appended to the DOM.

The Firefox/Opera behavior is desirable for making existing scripts on the web go fast since all the scripts can be downloaded in parallel, but still execute in the order you specify -- there are many scripts today that assume their implicit dependencies have already been executed before the script executes.

jQuery plugins that assume jQuery is already available in the page are a good example.

Kyle Simpson of the LABjs project figured out a nice hack for IE and WebKit to get them to execute scripts in order. He gets IE and WebKit to download the scripts first without executing them (via a non-script script type, like type="text/cache"); then adding the real scripts to the DOM in order means they execute in order.

However, this technique has edge cases where it can fail, in particular with poor cache headers, and the latest version of the HTML5 spec effectively disallows the hack. The code to get ordered execution right now also has some browser sniffing to get it to work, which is clearly not ideal.

The ideal case is to have a capability that can be sniffed in the browser, and something that would allow for ordered execution of dynamically added script elements without the need of browser hacks.

Kyle has been working with HTML5 spec folks to try to work out something along those lines.

However, the HTML5 spec participants are not sure this use case is important enough to warrant a spec change. I do think it is important for existing scripts to get some performance benefit.

While RequireJS does not rely on this behavior for its core operation (the define() function wrapper ensures that it is not an issue, and assumes out of order script execution), the order! plugin in RequireJS does depend on this behavior.

So, if you are a RequireJS order! plugin user and you really depend on it, or if you are a LABjs user, please take a moment to let the HTML5 group know the sites you work on that would be affected if those tools no longer worked in the browsers being developed today.

Kyle Simpson has set up a wiki page to describe the problem. There is a Discussion section of the page where you can voice your feedback. Click the + in the tabs at the top to add a section where you can list how this issue would affect your sites. Be sure to leave your name in case the group would need to get more information from you.

If you do depend on this feature, please take a moment to let the HTML5 group know.

Sunday, October 17, 2010

Fixed a bug where scripts were not loaded from the correct path. It did not affect RequireJS+jQuery builds, but affected other builds. If you do not use a RequireJS+jQuery build, then it is strongly recommended that you upgrade from 0.14.4 to 0.14.5.

Added an urlArgs config option to allow for cache busting when servers/browsers misbehave during development.

I apologize for the quick series of releases this week. I should be slowing them down a bit now.

Saturday, October 16, 2010

The bundled RequireJS+jQuery file on the download page has jQuery 1.4.3 in it.

Due to a change in jQuery, there are almost no patches to jQuery in the RequireJS+jQuery file, just a convenience patch to register it as a module.

If your scripts properly use define() with RequireJS, then it is possible to load jQuery from the Google CDN. This can save you some bandwidth costs!

There are a few caveats with using RequireJS to load jQuery from the CDN:

Most jQuery plugins assume jQuery is already loaded when they execute. However, with RequireJS, jQuery could still be loading when the plugin file executes. If you plan to use jQuery plugins in your project and you really want to load jQuery from the CDN, then in your own RequireJS-based project, you should wrap each of the plugins in a define(function(){ /*plugin goes here */ }); wrapper. Or, just stick with using the combined RequireJS+jQuery file served from your server.
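As an illustration of that wrapping (using a two-line stand-in for define() and a fake jQuery object so the snippet is self-contained; in a real page RequireJS supplies define() and the "jquery" module, and "highlight" is a made-up plugin name):

```javascript
//Stand-ins, for illustration only:
var modules = { jquery: { fn: {} } };
function define(deps, factory) {
    factory.apply(null, deps.map(function (d) { return modules[d]; }));
}

//The wrapped plugin file: the original plugin body goes inside,
//and it only runs once jQuery has loaded.
define(['jquery'], function (jQuery) {
    jQuery.fn.highlight = function () { return this; };
});
```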

If you use code with define() calls in it, be sure to do the minification and combining of scripts using the RequireJS optimization tool. It will make sure the define() calls get proper script names to allow all the scripts to be combined together. If you use another tool to just concatenate your define()'d scripts together, it will result in errors if the define()'d modules are not named. A copy of the optimization tool is included in the jQuery+RequireJS Sample Project.

This is only recommended if you have one version of jQuery loaded in the page.

An example showing how to configure RequireJS to load jQuery from the CDN:
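A configuration along these lines could work (a sketch: the CDN URL assumes jQuery 1.4.3 on the Google CDN, the "app" module name is illustrative, and note that the path value omits the .js extension):

```javascript
require({
    paths: {
        //Point the "jquery" module id at the CDN copy:
        "jquery": "http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min"
    }
},
["jquery", "app"],
function ($, app) {
    //Both jQuery and the app module have loaded at this point.
});
```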

Thursday, October 14, 2010

Support for define(). It works the same as require.def(). It is supported in order to conform with the Asynchronous Module Proposal. require.def will continue to work, but you are encouraged to gradually migrate to define() for better compatibility with other async module loaders.

GPL license option removed: project is now just MIT and new BSD dual-licensed, since the new BSD license is compatible with the GPL.

The big motivation was the require.def() -> define() API change. I do not expect any more top-level changes to the API now, particularly when related to the Asynchronous Module proposal. There may be some API changes around how plugins are written at some point, but those are tricky to write today, and could use an API cleanup. However, using plugins in your application code will likely stay the same.

To be clear: require.def() will continue to be supported. If you only care about loading your modules via RequireJS, you can continue to use it. However, if you think you might want to allow your code to work in other script loaders that follow the Asynchronous Module API, then you should look at switching to define().
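One way to picture why both names can coexist during the migration (a toy sketch, not RequireJS's actual internals): define() can simply be another name bound to the same registration function.

```javascript
//Toy registration function standing in for a loader's internals:
var loader = {};
loader.def = function (name, deps, factory) {
    return { name: name, deps: deps, exports: factory() };
};

//define() as an alias of the older name:
var define = loader.def;

var a = loader.def('a', [], function () { return 1; });
var b = define('b', [], function () { return 2; });
```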

Relying on a server to generate client-friendly code is not a universal solution, and it is a particularly poor solution for doing mobile web development, where apps can be created with HTML/CSS/JS but run from the mobile device without a server.

RequireJS implements the Async Module proposal. There are some rough spots around the Packages proposal, particularly around mappings, but it is something that can be used, and I'm trying to support some version of packages with RequireJS.

The ECMAScript folks are looking at a Simple Modules spec, but it does not solve any of the problems for me. The problems have been addressed by RequireJS. It would be nice not to have to include the code for RequireJS in an app, but it works, it is not crazy big, and it can always be pushed down to native environment support after it is broadly in use.

fn with no globals

What I would like to see instead of a Simple Modules is "fn". It is like "function" but it does not have access to the global space. You would use it just like "function" but any variables missing "var" would not be in the global space (could throw an error). If it needs a "use fn" or something like that, OK, but I am not familiar with when "use" strings are needed. Example:

fn hello() {
    //the next line would throw
    message = 'hello';

    //the next line would be an error too
    var name = window.name;

    return message;
}

That would give me one of the big things out of Simple Modules that I cannot already have via RequireJS. It would also shorten the traditionally cumbersome "function" word and encourage good coding practice since it does not have access to the global scope. For situations that need global access, fall back to good old "function".

I would not be surprised if this has already been suggested, but as mentioned in my sketches preamble, just jotting down things as they come to me without doing due diligence.

David Ascher and I had a discussion last week about server side JavaScript. He mentioned that while server side JavaScript would give some small incremental advantage, there would still be a split between what server devs need and do and what browser devs need and do. Other people have made a similar observation.

Here is one sketch on what server side JavaScript could bring.

Server-side JavaScript

There are two ways to treat the browser and server workload split. One treats the browser as a dumb client, the other as a smart client.

1) Dumb client

Blogs, news sites, wikis. Content Management Systems (CMS). Serve HTML strings to a browser. There is likely some browser interactivity, but the sites can get by without it via progressive enhancement. The server is responsible for generating the HTML from data mashed together with templates.

This use case is a well-understood problem space. It is not always executed well, but it has been done many times. Having server side JavaScript will help with some sharing of code/design approaches with the browser, but that is about it. Still, for me it would be nice to have SSJS solutions for these cases.

2) Smart client

Servers as APIs/data stores. The server is doing straight-out business logic, data manipulation. The data sent to the browser is just in JSON, preferably not XML.

Server-side JavaScript is useful here given the expressiveness of the language: closures and anonymous functions make async callbacks easier to work with, and Node's focus on an event loop is a great fit, a more natural experience for a browser developer helping out in this area.

Using Servers as Web Workers

This is an extension of #2. First, go read up on Web Workers. Short review:

var worker = new Worker('http://example.com/some/thing');

//Get messages from the worker
worker.onmessage = onMessageFunction;

//Get errors from the worker
worker.onerror = onErrorFunction;

//Post messages to the worker; only
//JSON-compliant messages go back and forth
worker.postMessage({"msg": "hello"});

What would be ideal is that instead of running the web worker in the browser, it would do its work on the server. To bootstrap, a small JS shim could be used that runs in the browser that does the server communication to run the code on the server, and then a Web Worker environment/toolkit/library on the server needs to be created. This node-worker project might be a good place to start for the server code.

The important point: an app developer codes the logic on the server to the Web Worker environment, using postMessage to send out responses, onmessage and onerror to receive responses.
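A sketch of that worker-side contract (postMessage is stubbed out here so the snippet is self-contained; in a real Web Worker, or the proposed server-side worker, the environment provides it, and the "hello" message shape is just an example):

```javascript
//Stub that records outgoing messages instead of sending them:
var sent = [];
function postMessage(msg) { sent.push(msg); }

//The worker's logic: receive JSON-compatible messages and respond
//via postMessage. The same shape works for request/response or for
//longer-lived comet-style messaging.
var onmessage = function (event) {
    if (event.data.msg === 'hello') {
        postMessage({ msg: 'hello back' });
    }
};

//Simulate the environment delivering a message:
onmessage({ data: { msg: 'hello' } });
```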

The neat thing about this approach: it can be used for simple request/response actions, or for longer term, comet-style long-lived messaging. It also could allow for actually running or mocking the server endpoint in the browser.

There needs to be some way to manage state. It would be nice to "pause" and "resume" a server-based web worker. Maybe that is just cookies, but it has to be secure against things like CSRF, and still give the user the ability to clear that state like they can clear cookies today.

The other nice thing about this model is that it fits in with the event loop that JS developers use today, and it is a tightly constrained environment. No messing about with "requests" and "responses" in the traditional server sense.

There are probably concerns about what it does to a REST approach to API development. I am hoping that it just transforms the REST calls into events routed to web workers, not sure how that will shake out yet.

I have a few JavaScript-related things I want to talk about. I prefer implementing something or giving a proposal more consideration before talking about it. However, I find that I end up not talking about a lot of things that bounce around in my head. So I will start talking more about them, even though they are not fully formed. This blog post sets up the context and disclaimers for those posts, so I can link back to it.

Disclaimers

I will make a bunch of declarations that I will not back up as much as they deserve. I may expand on them in later posts, but most likely I will not. The point of these sketches is to get the basic thought out instead of keeping it internal.

If I cast a negative light on your favorite project, keep in mind this is my blog, so it is naturally biased, and there is always a need to hedge bets. Until there are implementations with real world use, the future is always malleable, and often big enough to accommodate a few different views. Keep working on what you are passionate about.

I will be ruthless, and perhaps seemingly unfair, in my comment management/deletion policy. You are free to say your own piece on your own blog.

Who am I?

I have been working with JavaScript since at least 1998. It was earlier than that, but that was the start of a large scale project that got real users. Since then I have used it on and off. I still need to learn more about the inner dark places of the language and its implementations, but I have used it to build some larger front-ends, particularly while at AOL: a picture service UI, a cross-browser plugin for streaming radio, a chat service UI, and helped out with a webmail service UI. I contribute to Dojo, make RequireJS, and work at Mozilla on web-based messaging services. I only want to develop for the web platform, using JavaScript. I did not study computer science, but physics, so I lack some of the CS technical underpinnings.

These issues affect use cases with traditional CommonJS modules/converted modules or with CommonJS packages. If you do not deal with those use cases, there is less urgency to upgrade.

I recently started using the RequireJS code in Node beyond simple tests, working on a Node-based package tool. Once I get it up and running, I'll post more on this blog. In the meantime, there is a design sketch, and some code, but it is very rough at the moment, mostly scaffolding.

Monday, September 27, 2010

I just pushed a small update to fix three issues that dealt mostly with the new shortened anonymous module syntax used to wrap traditional CommonJS modules, and the converter tool for adding in the anonymous wrapper.

If you were using the regular RequireJS module format, with the dependencies specified outside the definition function, then you likely do not need to upgrade right away.

In fact, you may want to wait a couple days to see if there are any other updates. Given the newness of the anonymous module code and more people trying it with traditional CommonJS modules, I want to push out quicker releases to give those new users the best experience. I will be sure to mention if the update is recommended for all users.

There is one fix in this release for a type of deeply cyclic/circular dependency issue, but I believe it to be a rare issue for most current users. If 0.14.0 is working for you, then no need to try the latest version right away.

The async module format/async require that is now supported by RequireJS really feels like it is the best of both worlds: something that is close enough to traditional CommonJS modules to allow those environments to support the format, while still having something that performs well and is easy to debug in the browser. I really hope the format can be natively supported in existing CommonJS engines. Until then, RequireJS works in Rhino and has an adapter for Node.

Working with CommonJS packages has a few manual steps: finding the package, downloading it, configuring its location. I want to work on a command line package tool that makes this easy. Hopefully it will be able to talk to a server-side package registry too, to allow simpler package lookups by name (something like npm, but something that can house modules in a format used by RequireJS). Kris Zyp has already done some work in this area, and I hope to just use it outright for RequireJS, or leverage the code.

Once that lands, then it feels like it will be time for a RequireJS 1.0 release. The code has been very usable for a few releases now, but I have kept the release numbers below 1.0 to indicate that the final mix of features were being worked out. With the changes in this release, it feels like the major format changes have landed. For those of you who have used previous RequireJS releases, your code should still work fine, and it should work as-is in future releases too.

Monday, September 20, 2010

Thanks to the clever research and design feedback from Kris Zyp, I just finished committing some preliminary support for anonymous modules in RequireJS.

What are anonymous modules?

They are modules that do not declare their name as part of their definition. So instead of defining a module like so in RequireJS:

require.def('foo', ['bar'], function(bar) {});

You can now do this:

require.def(['bar'], function (bar) {});

When using RequireJS in the browser, the name of the module will be inferred by the script tag that loads it. For Rhino/Node, the module name is known at the time of the require.def call by the require() code, so those environments have an easier way to associate the module definition with the name.

Why is this important?

This allows your modules to be more portable -- if you change the directory structure of where a module is, there are fewer things that you need to change. You still may want to check module dependencies, but RequireJS now fully supports relative module names, like "./bar" and "../bar", so using those helps make your module more portable.

Requiring a module name in the module definition was also a notable objection that the CommonJS group had to the module format that RequireJS supports natively. By removing this objection, it gets easier to talk about unifying module formats across the groups.

To that end, there has been talk in the CommonJS group of an Asynchronous Module definition, something that allows the modules to work well in the browser without needing any server or client transforms. Some of the participants do not like the module format mentioned above, and prefer something that looks more like the existing CommonJS modules.

The idea is to wrap the traditional CommonJS module body in a function, and use Function.prototype.toString() to pull out the require calls and be sure to load them before executing the module's definition function. After doing some research, it seems like this approach could work for modules in development, and optimizations could be done for deployment that would add the module name and the dependencies outside the function.
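A rough sketch of that scanning approach, assuming a simple regex over the function source (the function name, regex, and module names below are my illustration, not RequireJS's actual code):

```javascript
//Scan a module definition function's source for require("...")
//calls so a loader could fetch those dependencies before
//executing the function. Illustrative only; a real loader must
//also handle comments and other edge cases.
function extractDependencies(factory) {
    var deps = [];
    var re = /require\s*\(\s*["']([^"']+)["']\s*\)/g;
    var source = factory.toString();
    var match;
    while ((match = re.exec(source)) !== null) {
        deps.push(match[1]);
    }
    return deps;
}

var factory = function (require, exports, module) {
    var a = require("a");
    var b = require("lib/b");
    exports.name = "example";
};

var deps = extractDependencies(factory); // deps is ["a", "lib/b"]
```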

So I also put support for the above syntax into RequireJS, in an attempt to get to an async module proposal in CommonJS that works both for the people who like the old, browser-unfriendly syntax and for people like me who prefer a browser-friendly format I can code in source.

We still need to hash out the proposal more, but I am hopeful we can find a good middle ground. I also hope the above syntax makes it easier to support setting the module export value via "return" instead of having to use "module.exports =" or "module.setExports()".

I still plan to support the syntax that RequireJS has supported in the past -- any of this new syntax will hopefully be additive.

What is the fine print?

Only one anonymous module can be in a file. This should not be a problem, since you are encouraged to only put one module in a file for your source code.

The RequireJS optimization tool can group modules together into an optimized file, and it has the smarts to also inject the module name at that time, so you get less typing and a more robust module source form, but still get the optimization benefits for deployment.

In addition to adding the module name, the RequireJS optimization tool will also pull out the dependencies that are specified using the CommonJS Asynchronous Module proposal mentioned above, and add those to the require.def call to make that form more efficient.

When will it be available?

Right now the code is in the master branch. Feel free to pull it and try it. There may be some loose ends to clean up, but there are unit tests for it, and the old unit tests pass.

This code will likely be part of a 0.14 release. I want to get in loading modules from CommonJS-formatted packages before I do the 0.14 release, so it still is probably a few weeks away. But please feel free to try out the latest code in master to get an early preview.

Again, many thanks to Kris Zyp for seeing patterns I overlooked, doing some great IE research, and for pushing for these changes.

[Sidenote: I'm going to use JavaScript instead of ECMAScript in this post -- JavaScript and I go way back, before it got its colonial, skin disease-inspired name.]

Simple Modules is a strawman proposal at the moment; it is still a work in progress. Some of the more interesting parts for me, like the dynamic loading, are still very rough and in a separate proposal. It sounded like Dave wants to focus on prototyping the lexical scoping and static loading bits first before proceeding further on the dynamic bits. Actual prototyping is a great idea, and it sounds like they will leverage Narcissus for it.

So some of my feedback may be a bit premature, but some of it gets to why there are modules and what should be allowed as a module, so hopefully that might be useful even at this early stage.

First, some perspective on where my feedback comes from: I am a front-end developer, I do web apps in the browser. I love JavaScript, I want to use it everywhere, and I believe that it is the only language that has the potential to be used effectively anywhere.

However, that is only because JavaScript is available and works well in the browser. Any new solutions for modules should *work well* in the browser to be considered a solution. The browser environment should be treated as a first class citizen, keeping in mind the browser performance implications on any approach. This is one of my main criticisms of CommonJS modules, and the reason I write RequireJS, a module loader that works well in the browser. I also maintain Dojo's module loader and build system.

Why

Why have modules? What are they? Modules are smaller units of code that help build up larger code structures. They make programming in the large easier. They usually have specific scope, and avoid dumping properties into the global scope. Otherwise the likelihood of a name collision between two modules is very high and errors occur. So a module system has syntax to avoid polluting the global space.
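That scoping is what the function-based module pattern referenced throughout this post already provides; a minimal sketch (the names here are mine):

```javascript
//An immediately invoked function creates a private scope;
//only the returned object is visible outside. "counter" and
//its methods are illustrative names.
var counter = (function () {
    var count = 0; //private; never becomes a global
    return {
        increment: function () { count += 1; return count; },
        value: function () { return count; }
    };
}());

counter.increment();
var current = counter.value(); // current is 1; "count" itself is unreachable
```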

There also needs to be a way for modules to reference other modules.

Simple Modules

The Simple Modules proposal outlines a Module {} block to define what looks like a JavaScript object as far as inspection goes (for..in enumeration, dot property referencing), but is something more nuanced underneath.

Anything inside the Module {} block is not allowed to use a global object, and you cannot add/change a Module after its definition. Here is a sample module, called M, that demonstrates some of the syntax and scoped variable implications:

module M {
    //In normal code this would define a global,
    //but not inside the module declaration. This
    //is likely to be an error(?) in Simple Modules.
    foo = "bar";

    //color is only visible within module M's block
    var color = "blue";

    //Creates a publicly visible property called
    //"name" on the module.
    export name = "Module M";
}

To the extent that modules help programming in the large, the Simple Modules approach does not give me anything more than I have now, and the proposal has some specific weaknesses. I can appreciate there are some juicy things in the proposal for JavaScript engine developers, but as a user of modules, I do not see it as a net advantage. Here is why:

Lexical scoping/Global access

In addition to using the function-based module pattern to avoid leaking globals, I use JSLint to avoid accessing globals and the use of eval/with. For programming in the large, JSLint helps even more because it enforces a code style that produces much more uniform code. It is built into many editors and easy to run as part of build processes.

JSLint is not perfect (I would like to see JavaScript 1.7/1.8 idioms supported, like let and for each), and you may not like some of the style choices. However, reproducible, consistent style that can be checked automatically is more important than bikeshed-based style choices. It warns of global usage, eval and with, and even helps you find unused local variables.

What is even nicer is that you can opt out of some of the JSLint choices; you can use some globals if you need to. There is some flexibility in the choices.

Functions as Modules

For #3, better syntax for module definitions, I do not see it as a net win over the function(){} module pattern, particularly as it is used for modules in RequireJS, where it encourages not defining global objects.

The Simple Modules syntax does not allow exporting a function as the module definition. This is a big wart to me. Functions are first class entities in JavaScript, one of its strongest features. It is really ugly to me that I have to create a property on a module object to export a constructor function, or some module that lends itself to being a function.
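As a sketch of the complaint, hedged because the strawman syntax was still in flux (the module and function names here are my invention):

```
module Cart {
    //The constructor cannot be the module's own value; it must
    //hang off an exported property instead.
    export function Cart(items) {
        this.items = items;
    }
}

//Callers then end up with a doubled name:
var cart = new Cart.Cart(["apple"]);
```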

This is ugly. It should be possible to set the module value to be a function. I know this makes some circular dependency cases harder to deal with, but as I outlined in the CommonJS trade-offs post, it is possible to still have circular dependencies. Even in CommonJS environments now, it is seen as useful: Node supports setting the exported value to a function via module.exports, and there is a more general CommonJS proposal for a module.setExports.

It means the developer that codes a circular dependency case needs to take some care, but it works. Coding a circular dependency is a much rarer event than wanting to use a module that exports just a constructor function or function. The majority use case should not be punished to make a minority use case a little easier, particularly since you can still trigger errors in the minority use case. Coding a circular dependency will always require special care.

This particular point makes it hard for me to get on board with Simple Modules even in its basic lexical scoping/static loading form. I strongly urge any module proposal to make sure functions can be treated as the exported value. We have that capability today with existing module implementations, and it fits with JavaScript and the importance it places on functions.

Given the extra typing that would be needed to access functions that are exported as modules, I do not see the Simple Modules syntax as a net win over the function-based module pattern, particularly as used in RequireJS.

Beyond Lexical Scoping

For programming in the large, more capabilities are needed than what has been outlined so far for Simple Modules. Making modules useful requires attention in these areas. This is the "99 problems" part:

Dynamic loading

Dynamic loading is harder to work out than static loading. If there is dynamic loading, it is unclear why I would need static loading. The goal of modules is to allow programming in the large, and even for a smaller project, why do I need to learn two ways to load modules (static vs. dynamic), when one (dynamic) will do? Dynamic loading is also necessary to enable all the performance options we have today to load scripts in the browser.

There is a module loader strawman proposal that would tie into Simple Modules, but I understand it will not be nailed down more until the basic Simple Modules with static loading is worked out/prototyped.

Referring to other modules

It is unclear how a Module Resource Locator (MRL) is translated to a path to find a module. In CommonJS/RequireJS, an MRL looks like "some/module", and that MRL is used in require() calls to refer to other modules. require("some/module") translates the MRL string "some/module" to some path, "a/directory/that/has/some/module.js". That path is used to find and load the referenced module.
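As a sketch of that translation, assuming a baseUrl plus an optional paths map in the loader config (the function and config shape below are my illustration, loosely modeled on RequireJS's behavior):

```javascript
//Translate a module identifier like "some/module" into a URL,
//letting config remap the leading segment. Simplified: no
//handling of protocols, ".js" suffixes, or relative ids.
function nameToUrl(name, config) {
    var parts = name.split("/");
    if (config.paths && config.paths[parts[0]]) {
        parts[0] = config.paths[parts[0]];
    }
    return config.baseUrl + "/" + parts.join("/") + ".js";
}

var config = { baseUrl: "scripts", paths: { some: "vendor/some" } };
var url = nameToUrl("some/module", config); // "scripts/vendor/some/module.js"
```

Because the mapping lives in config, moving code on disk only means updating the paths entry, not editing every module that references "some/module".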

Looking at the Simple Modules examples, it looks like just plain URLs are used as the MRL, and those do not scale well for programming in the large. You will want to use a symbolic name for the MRL, and allow some environment config to map those symbolic names to paths. Otherwise it places too many constraints on how the code is stored. It may not even be a disk -- apparently CouchDB uses design docs to store modules.

I have seen some comments about using more symbolic names for MRLs in some of the notes around the proposals, so maybe it is planned.

In RequireJS, the symbolic name is also used in the module definition. However, since symbolic names can be mapped, they do not have to be the reverse DNS symbolic names, like "org/mozilla/foo". In fact it is encouraged to not use long names.

Distributing and sharing modules/module groups (packages)

This issue can be treated separately from a module spec, but it could affect how MRLs are mapped via a module loader. And this issue really is important for programming in the large. The solution may just be "use packages as outlined by CommonJS". While there are still some gray areas in the package-related specs for CommonJS, that could be a fine answer to the problem.

Performance in the browser

This is getting even further away from the basic Simple Modules spec, but a solution to this issue should be considered for any module solution. There needs to be an efficient way to deliver many modules at once to the browser. I have heard that Alexander Limi's Resource Packages proposal may be a way to solve this that could work with the Simple Modules approach.

A common loading pattern for web apps will be to load some base scripts from a Content Delivery Network (CDN), then have some domain-specific scripts to load. As long as this still works well with the bundling solution, that is great. We already have tools today to help with bundling, minifying and gzipping scripts. Any solution will have to be better than what we can do today. Resource Packages could be, since it allows other things, like images, to be bundled effectively.

Summary

I do not feel that Simple Modules is an improvement over what can be done today. In particular, RequireJS used alongside JSLint is a compelling existing solution, and it works well, and fast, in the browser.

For the more immediate goals of Simple Modules:

the expanded, stricter lexical scoping is nice, but for a web developer, it is a slight incremental benefit if JSLint is already in use.

Not being able to set a function as the module value means the syntax is not a net win over the function-based module pattern.

The larger issues of module addressing and bundling/distribution are understandably hazy in this early stage of the strawman proposals, but they will need to be addressed as well as or better than existing solutions to gain traction.

I do not want to contribute stop energy around the proposals, I am just hoping to provide feedback to indicate what problems need to be solved better from my web developer viewpoint. I appreciate I could be wrong on some things too. I may be missing something grander or larger, but hopefully if that is the case, this feedback can indicate how to explain the proposals better.

Sunday, July 04, 2010

A new plugin: order -- it ensures that scripts are fetched asynchronously and in parallel, but executed in the order specified in the call to require(). Ideal for traditional browser scripts that do not participate in modules defined via calls to require.def().

Wednesday, May 19, 2010

I have a clone of the Jetpack SDK that has support for the require() and require.def() syntax supported by RequireJS.

Right now the syntax support is very basic. It does not support these features of RequireJS:

configuring require() by passing in a config object to it

plugins

require.modify

require.nameToUrl

require.ready (does not make sense)

But you can do the main things, like:

require(["dependency"], function (dependency) {});

and define modules via:

require.def("moduleName", ["dependency"], function (dependency) {});

It should also support CommonJS modules that were converted to RequireJS syntax via the conversion tool in RequireJS, but I have not tested it extensively.

The changes are just in one file in the sdk, securable-module.js. So you could just grab that file if you wanted to play with it. There is a sample app in the source if you want to see it in action. Also viewing the changeset shows the diff on the securable-module.js file as well as the example app source.

Why do this? Because sharing code between the browser and other environments is hard with the regular CommonJS syntax. It does not work well in the browser. The browser-based CommonJS loaders that use eval() have a worse debugging experience. Starting with the RequireJS syntax makes it easy to transfer the modules for use in the web browser, and the RequireJS code works in Node and Rhino.

I would like to add support for RequireJS plugins in Jetpack. I can see the i18n plugin and text file plugin being useful for Jetpacks. That will likely take more work though. I want to see if the basic syntax support is useful first.

I ended up not using that much RequireJS code, just some argument conversions and supporting "setting the exported value". It relies on the existing Jetpack code for paths and package support.

The priority config option is the parallel download support I mentioned in the "A require() for jQuery" post. I now believe RequireJS meets all the requirements outlined in that post.

Some icing on the cake I want to pursue: a server-based service that can create optimization layers on the fly. I have all the pieces in place in the optimization tool to allow this, and I previously built a server build option for Dojo. With that, you could conceivably use the priority config support with a server that did the optimization layers on the fly:

Or something like that. The fun part -- this server endpoint would use server-side JavaScript, since the optimization tool in RequireJS is built in JavaScript. I could use Node or something Rhino-based. It is likely to be Rhino-based, because that allows the minifier, Closure Compiler, to work on the fly, as Closure Compiler is written in Java.

That server-based service will likely take more design work and thought, but if you feel it is something necessary for your project, please let me know. Better yet, if you want to contribute to the project in this area, leave a note on the mailing list.

Thursday, April 29, 2010

In the conference wrap-up, John Resig mentioned some requirements he has for a jQuery script loader:

1) script loading must be async

2) script loading should do as much in parallel as possible. This means in particular, that it should be possible to avoid dynamic nested dependency loading.

3) it looks like a script wrapper is needed to allow #1 and #2 to work effectively, particularly for cross-domain loading. It is unfortunate, but a necessity for script loading in browsers.

I believe these requirements mesh very well with RequireJS. I will talk about how they mesh, and some other things that should be considered for any require() that might become part of jQuery.

Async Loading

As explained in the RequireJS Why page, I believe the best-performing, native browser option for async loading is dynamically created script tags. RequireJS only uses this type of script loading, no XHR.

The text plugin uses XHR in dev mode, but the optimization tool inlines the text content to avoid XHR for deployment. Also, the plugin capability in RequireJS is optional, it is possible to build RequireJS without it. That is what I do for the integrated jQuery+RequireJS build.

Parallel Loading

John mentioned that dynamic nested dependency resolution was slower and potentially a hazard for end users. Slow, because it means you need to fetch the module, wait for it to be received, then fetch its dependencies. So the module gets loaded serially relative to its dependencies. Potentially hazardous because a user may not know the loading pattern.

The optimization tool in RequireJS avoids the serial loading of nested dependencies by just inlining the modules together. The optimization tool can also build files into "layers" that could be loaded in parallel.

For each build layer, there is an exclude option, in which you can list a module or modules you want to exclude. exclude will also exclude their nested dependencies from the build layer.

There is an excludeShallow option if you just want specific modules to exclude, but still want their nested dependencies included in the build layer. This is a great option for making your development process fast: just excludeShallow the current module you are debugging/developing.
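A hypothetical build layer entry showing both options side by side (the module names are made up):

```javascript
var layer = {
    name: "app/page1",
    //exclude drops these modules and all of their nested
    //dependencies from the built layer:
    exclude: ["jquery"],
    //excludeShallow drops only the named module itself, keeping
    //its nested dependencies in the layer; useful while
    //debugging just that one file:
    excludeShallow: ["app/page1/controller"]
};
```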

While dynamically loading nested dependencies can be slower than a full parallel load, what is needed is a listing of dependencies for each individual module. There needs to be a way to know what an individual file needs in order to function, if the file is to be portable in any fashion. So the question is how to specify those dependencies for a given file/module.

There are schemes that list the dependencies as a separate companion file with the module, and schemes that list the dependencies in the module file. Using a separate file means the module is less portable -- more "things" need to follow the module, so it makes copy/pasting or distributing just one module more onerous.

So I prefer listing the dependencies in the file. Should the dependencies be listed in a comment or as some sort of script structure?

Comments can be nice since they can be stripped from the built/optimized layer. However, it means modules essentially need to communicate with each other through the global variable space. This ultimately does not scale -- at some point you will want to load two different versions of a module, or two modules that want to use the same global name, and you will be stuck. For that reason, I favor the way RequireJS does it:
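The model can be sketched with a minimal, synchronous stand-in for require.def (the stub, registry, and module names are mine; the real RequireJS loader resolves dependencies asynchronously):

```javascript
//A toy module registry standing in for require.def.
var registry = {};
function def(name, deps, factory) {
    //Look up each dependency and pass it as a function argument;
    //the factory's return value becomes the module's value.
    var args = deps.map(function (d) { return registry[d]; });
    registry[name] = factory.apply(null, args);
}

def("dependency1", [], function () {
    return { greet: function () { return "hello"; } };
});

def("my/module", ["dependency1"], function (dependency1) {
    //dependency1 arrives as an argument, so no global is needed,
    //and the reference minifies well.
    return { message: dependency1.greet() + " from my/module" };
});

var value = registry["my/module"].message; // "hello from my/module"
```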

With this model, dependency1 does not need to be global, and it allows a very terse way to reference the module. It also minifies nicely. By using string names to reference the modules and using a return value from the function, it is then possible to load two versions of a module in a page. See Multiversion Support in RequireJS for more info, and the unit tests for a working example.

This model also frees the jQuery object from namespace collisions by allowing a terse way to reference modules without needing them to hang off of the jQuery object. There are many utility functions that do not need to be on the jQuery object to be useful, and today the jQuery object itself is starting to become a global of sorts that can have name collisions.

Script Wrapper

Because async script tags are used to load modules, each script needs to be wrapped in a function wrapper, to prevent its execution before its dependencies are ready. CommonJS recognizes this concern (one of the reasons for their Transport proposals) and so does YUI3. xdomain builds for Dojo also use a script wrapper.

While it is unfortunate -- many people are not used to it -- it ends up being an advantage. Functions are JavaScript's natural module construct, and it encourages well scoped code that does not mess with the global space. For RequireJS, that wrapper is called require.def, as shown above.

Here are some other things that should be considered for a require implementation:

require as a global

I believe it makes more sense to keep require as a global, not something that is a function hanging off of the jQuery object. require can be used to load jQuery itself, and as mentioned above, it would be possible to load more than one version of jQuery if it was constructed like this.

In addition, RequireJS was constructed with many of the same design goals as CommonJS: allow modules to be enclosed/do not pollute the global space, use the "path/to/module" module identifiers, have the ability to support the module and exports variables used in CommonJS.

Browsers need more than a require API

They also need an optimization/build tool that can combine modules together. RequireJS has such a system today. It is server-independent, a command line tool. It builds up the layers as static files which can be served from anywhere.

I am more than happy to look at a runtime system that uses the optimization tool on the server. RequireJS works in Node and in Rhino. The optimization tool is written in JavaScript and uses require.js itself to build the optimization layers.

I can see using either Node or Rhino to build a run-time server tool to allow combo-loading on the fly. Using Rhino via the Java VM has an advantage because Closure Compiler or YUI Compressor could be used to minify the response, but I am open to some other minification scheme that is implemented in plain JavaScript.

Loader plugins

I have found the text plugin for RequireJS to be very useful -- it allows you to reference HTML templates on disk and edit HTML in an HTML editor vs. dealing with HTML in a string. The optimization tool is smart enough to inline that HTML during a build, so the extra network cost goes away for deployment.

In addition, Sean Vaughan and I have been talking about support for JSONP-based services and scripts that need extra setup besides just being ready on the script onload event. I can see those as easy plugins to add that open up loading Google Ajax API services on the fly.

For these reasons I have found loader plugins to be useful. They are not needed in the basic case, but they can make overall dependency management better.

script.onload

Right now RequireJS has support for knowing when a script is loaded by waiting for the script.onload event. This could be avoided by mandating that anything loaded via require() register via require.def to indicate when it is loaded.

However, by using script.onload it allows some existing scripts to be loaded without modification today, to give people time to migrate to the require.def pattern. I am open to doing a build without the script.onload support, however the amount of minified file savings will not be that great.

Explicit .js suffix

RequireJS allows two different types of strings for dependencies. Here is an example:

require(["some/module", "http://some.site.com/path/to/script.js"]);

"some/module" is transformed to "some/base/path/some/module.js", while the other one is used as-is.

The transform rules for a dependency name are as follows: if the name contains a colon before a front slash (has a protocol), starts with a front slash, or ends in .js, do not transform the name. Otherwise, transform the name to "some/base/path/some/module.js".
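Those rules can be written as a small predicate (the function name and logic below are my paraphrase of the rules, not RequireJS's actual source):

```javascript
//Decide whether a dependency name should be transformed into
//"some/base/path/" + name + ".js", per the rules above.
function shouldTransform(name) {
    //Leave alone: names ending in .js...
    if (/\.js$/.test(name)) {
        return false;
    }
    //...names starting with a front slash...
    if (name.charAt(0) === "/") {
        return false;
    }
    //...and names with a protocol (a colon before any front slash).
    var colon = name.indexOf(":");
    var slash = name.indexOf("/");
    if (colon !== -1 && (slash === -1 || colon < slash)) {
        return false;
    }
    return true;
}

var a = shouldTransform("some/module"); // true
var b = shouldTransform("http://some.site.com/path/to/script.js"); // false
```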

I believe that gives a decent compromise between short, remappable module names (by changing the baseUrl or setting a specific path via a require config call) and loading scripts that do not participate in the require.def call pattern. There is also a regexp property on require that can be changed to allow more exceptions to the rules.

However, if this was found insufficient, I am open to other rules or a different way to list dependencies. The "some/module" format was chosen to be compatible with CommonJS module names, but probably some algorithm or approach could be used to satisfy both desires.

File Size/Implementation

Right now the stock RequireJS is around 3.7KB minified and gzipped. However, there are build options that get the size down to 2.6KB minified and gzipped by removing some features:

plugin support

require.modify

multiversion support (the "context" switching in RequireJS)

DOM Ready support

I am open to getting that file size smaller based on the feature set that needs to be supported.

3 layer loading

John mentioned a typical loading scenario that might involve three sections:

1) loading core libraries from a CDN (like jQuery and maybe a require implementation)

2) loading a layer of your common app scripts

3) loading a page-specific layer

{
    modules: [
        {
            //Inside app/common.js there is a require call that
            //loads all the common modules.
            name: "app/common",
            exclude: ["jquery"]
        },
        {
            //app/page1 references jquery and app/common as dependencies,
            //as well as page-specific modules.
            name: "app/page1",

            //jquery, app/common and all their dependencies will be excluded.
            exclude: ["jquery", "app/common"]
        }
        //...other pages go here following the same pattern...
    ]
}

This would result in app/common and app/page1 being loaded async in parallel. If require.js was a separate file from jquery.js, the following HTML could be used to load jQuery, app/common and app/page1 async and in parallel (the optimization instructions stay the same):

However, it is not quite flexible enough -- typically, modules that are part of app/page1 will not want to refer to the complete "app/common" as their only dependency, but instead specify finer-grained dependencies, like "app/common/helper". So the above could result in a request for "app/common/helper" from the "app/page1" script, depending on how fast "app/common" is loaded.

Notice the new "layers" config option; now the required module for the page is just "app/page1". The "layers" config option would tell RequireJS to load all of those layers first, and find out what is in them before trying to fetch any other dependencies.

This would give the most flexibility in coding individual modules, but give a very clear optimization path to getting a configurable number of script layers to load async and in parallel. I will be working on this feature for RequireJS for the next release.

Summary

Hopefully I have demonstrated how RequireJS could be the require implementation for jQuery. I am very open to doing code changes to support jQuery's desires, and even if jQuery or John feel like they want to write their own implementation, hopefully we can at least agree on the same API, and maybe even still use the optimization tool in RequireJS. I am happy to help with an alternative implementation too.

I know John and the jQuery team are busy, focusing mostly on mobile and templating concerns, but hopefully they can take the above into consideration when they get to script loading.

In the meantime, I will work on the layers config option support, improving RequireJS, and keeping my jQuery fork up to date with the changes. You can try out RequireJS+jQuery today if you want to give it a spin yourself.

Friday, April 23, 2010

The big feature in this release is integration with Node. Now you can use the same module format for both browser and server side modules. The RequireJS-Node adapter translates existing CommonJS modules on the fly, as they are loaded by the adapter, so you can continue to use server modules written in the CommonJS format for your Node projects.

The RequireJS-Node adapter is freshly baked, so there could be some rough edges with it, but it is exciting to see it work. See the docs for all the details.

0.10.0 also includes support for an excludeShallow option in the optimization tool. This will allow you to do an optimization build during development, but still excludeShallow the specific module you want to develop/debug in the browser. So you can get great debug support in the browser for just that one module, but still load the rest of your JS super-fast. No need for special server transforms.

Tuesday, April 13, 2010

There are different ways to inherit functionality in JavaScript, including using mixins (mixing in all the properties of one object into another object) and the use of prototypes.

In Dojo, there is dojo.mixin for doing mixins, and dojo.delegate for inheriting properties via prototypes. dojo.delegate is like ECMAScript 5/Crockford's Object.create(), but with a dojo.mixin convenience call.
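Roughly, dojo.delegate can be thought of as Object.create plus a property copy. Here is a small sketch (my own code, not Dojo's implementation) showing the difference between the two approaches:

```javascript
//mixin: copy every own property of source onto target.
function mixin(target, source) {
    for (var prop in source) {
        if (source.hasOwnProperty(prop)) {
            target[prop] = source[prop];
        }
    }
    return target;
}

//delegate: make a new object that inherits from obj via its
//prototype, then mix any extra properties onto the new object.
function delegate(obj, props) {
    return mixin(Object.create(obj), props || {});
}

var base = { greet: function () { return "hi"; } };
var viaMixin = mixin({}, base);  //properties copied now
var viaProto = delegate(base);   //properties looked up later

//A property added to base afterwards is visible through the
//prototype chain, but was never copied by the mixin.
base.shout = function () { return "HI"; };
```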

I really like dojo.delegate, or an Object.create+dojo.mixin combination, for inheriting, but it makes it hard to call methods you override from your parent. I see this problem show up frequently with widgets, which typically inherit from each other:

var MyWidget = Object.create(BaseWidget);

//BaseWidget also defines a postCreate method,
//but we want our widget to do work too, so we
//have to name BaseWidget explicitly to call it:
MyWidget.postCreate = function () {
    BaseWidget.postCreate.apply(this, arguments);
    //Do MyWidget's postCreate work here.
};

dojo.declare improves on this as far as typing goes, but the implementation of dojo.declare has always scared me. My JavaScript Fu is not strong enough to follow it, and I am concerned it is actually a bit too complicated.

The second argument to the object() function allows for specifying mixins.

With two mixins, mixin1 and mixin2, the parent for MyWidget would be an object that inherits from BaseWidget with mixin1 and mixin2's properties mixed in:

var MyWidget = object("BaseWidget", [mixin1, mixin2], function (parent) {
    return {
        postCreate: function () {
            //Call BaseWidget's postCreate, but if it
            //does not have a postCreate method, mixin1's
            //postCreate function will be used. If mixin1
            //does not have an implementation, then mixin2's
            //postCreate function will be used. If mixin2 does
            //not have an implementation an error is thrown.
            parent(this, "postCreate", arguments);

            //Do MyWidget's postCreate work here.
        }
    };
});

dojo.declare has the concept of calling a method called "constructor" if it is defined on the declared object, whenever a new object of the MyWidget type is created. I preserved that ability in object() but the property name for that function is "init" in the object() implementation.

The object() implementation is simpler than dojo.declare, but still gives easy access for calling a parent implementation of a function. It is not as powerful as dojo.declare -- dojo.declare has the concept of postscript and a preamble and even auto-chaining calls. However, I feel the simplified approach is better. It is clearer to follow the code, and to predict how it will behave. I also expect it to perform better.

I like the object() method because it uses closures and a function that accepts the parent function as an argument. Feels very JavaScripty. The prototype chain is a bit longer with the extra object.create() calls creating some intermediate objects, but I expect prototype walking is fast in JavaScript, particularly when you go to measure it in comparison to any DOM operation.
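Since the object() implementation is not shown in this post, here is a minimal, hypothetical sketch of the idea (the name makeObject and all details below are mine, not the actual implementation), enough to show how a parent helper like this can work:

```javascript
//Hypothetical object()-style helper. Parent lookup order matches
//the example above: base first, then each mixin in turn.
function makeObject(base, mixins, definition) {
    //Build the parent prototype: a mixin property is only copied
    //if nothing earlier in the chain already provides it.
    var proto = Object.create(base);
    (mixins || []).forEach(function (m) {
        for (var k in m) {
            if (m.hasOwnProperty(k) && !(k in proto)) {
                proto[k] = m[k];
            }
        }
    });

    //parent(instance, name, args): call the inherited version of a
    //method the definition overrides; throw if no parent has one.
    function parent(instance, name, args) {
        if (typeof proto[name] !== "function") {
            throw new Error("No parent implementation for: " + name);
        }
        return proto[name].apply(instance, args);
    }

    //Mix the definition's properties onto an object that inherits
    //from proto, so overrides shadow the parent chain.
    var props = definition(parent),
        result = Object.create(proto),
        k;
    for (k in props) {
        if (props.hasOwnProperty(k)) {
            result[k] = props[k];
        }
    }
    return result;
}

//Example use, mirroring the widget example above:
var BaseWidget = { postCreate: function () { return "base"; } },
    mixin1 = { extra: function () { return "mixin1"; } },
    MyWidget = makeObject(BaseWidget, [mixin1], function (parent) {
        return {
            postCreate: function () {
                return parent(this, "postCreate", arguments) + "+mine";
            }
        };
    });
```

The closure over proto is what gives the definition function its parent reference without any extra properties hanging off the final object.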

Are there ways in which the object() function is broken or insufficient? Is there a better way to do this? Or even a different way, something that does not rely on a parent reference?

I like the idea of mixing in just part of a mixin, or remapping a method to fit some other API's expectations, so I can see adding support for the remapping features, similar to what Alex does in the dojo.delegate experiment. However, I am not sure how valuable conflict detection or required-method support is.

I can see how, in large systems, it would help with detecting errors sooner, but then maybe the bigger problem is the complexity of the large system. And there is a balance between forcing strictness up front and ease of use. The trait.js syntax looks fairly wordy to me, and the extra benefit of the strictness may not be realized for most web apps.

Also, I do not see an easy way to get the parent reference. It seems like you need to remap each overridden parent function you want to call to a new property name. It seems wordy, with more properties hanging off an object. And do you need to make sure you do not pick a name that is already in use by an ancestor? Seems like it could lead to a bunch of goofy names on an object.

Reusing code effectively is an interesting topic. The traits approach is newer to me, and I keep wondering if there is a better way to do it. It has been fun to experiment with alternatives.