UI Engineer - Melbourne, Australia

At first glance, they may seem like a complicated set of new technologies, but Web Components are built around a simple premise. Developers should be free to act like browser vendors, extending the vocabulary of HTML itself.

If you’re intimidated by these new technologies or haven’t experimented with them yet, this post has a very simple message for you. If you’re already familiar with HTML elements and DOM APIs, you are already an expert at Web Components.

“Components” today

To understand why Web Components are so important, we need look no further than how we’ve hacked around the lack of Web Components.

As an example, let’s run through the process of consuming a typical third-party widget.

First, after including the widget's script, we add placeholder elements to the page where our widgets will be inserted:

```html
<div data-my-widget></div>
```

Finally, when the DOM is ready, we reach back into the document, find the placeholder elements and instantiate our widgets:

```javascript
// jQuery used here for brevity...
$(function() {
  $('[data-my-widget]').myWidget();
});
```

This was a bit of work, but we’re still not finished.

We’ve introduced a custom widget to the page, but it is not aware of the browser’s element lifecycle. This becomes clear if we update the DOM:

```javascript
el.innerHTML = '<div data-my-widget></div>';
```

Since this isn’t a typical element, we now must manually instantiate any new widgets as we update the document:

```javascript
$(el).find('[data-my-widget]').myWidget();
```

The most common way to avoid this constant two-step process is to completely abstract away DOM interaction. Unfortunately, that’s a pretty heavy-handed solution that usually results in widgets being tied to particular libraries or frameworks.

Component soup

Once our widgets have been instantiated, our placeholder elements have been filled with third-party markup:
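The exact markup depends on the widget. As a hypothetical illustration, based on the input-and-button internals discussed later in this article, the expanded placeholder might look like this:

```html
<div data-my-widget>
  <div class="my-widget-container">
    <input type="text" class="my-widget-input" />
    <button class="my-widget-button">Go</button>
  </div>
</div>
```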

This markup is now sitting in the same context as our application markup.

Its internals are visible when we traverse the DOM, and the styles for this widget exist in the same global context as our styles, leading to a high risk of style clashes. All of their classes must be carefully namespaced with my-widget- (or something similar) to avoid naming collisions.

Our code is now all mixed up with the third-party code, with no clean separation between the two. Basically, there is no encapsulation.

Hidden inside this element are its private implementation details, in the form of a document fragment:

```html
#document-fragment
<div>
  <input type="text" />
  <button>Go</button>
</div>
```

While these elements are visible to the naked eye, they are hidden from us when we traverse the DOM or write CSS selectors. To the outside world, even when instantiated, our custom widget is still just a single element.

We finally have a simple, encapsulated widget that behaves exactly like a standard HTML element.

In the interest of time

When we talk about Web Components, we’re not talking about a single technology.

The primary goal of Web Components is to give us the encapsulation we’ve been missing. Luckily, this goal can be achieved purely with Custom Elements and Shadow DOM. So, in the interest of time, we’ll begin by focusing on these two technologies.

Rather than immediately jumping into a list of new browser features, I find it helpful for us to first reacquaint ourselves with what we already know about the native elements we’ve been consuming for years. After all, it doesn’t hurt for us to understand what we’re trying to build.

What we already know about elements

We know that elements can be instantiated through markup or JavaScript:
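For example, using a standard input element (a trivial illustration):

```javascript
// Instantiation through markup:
//   <input type="text" />

// Instantiation through JavaScript:
var input = document.createElement('input');
input.type = 'text';
document.body.appendChild(input);
```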

Breathing life into our custom element

Currently, this is a pretty useless element. Let’s give it some content:

```javascript
// We'll now provide the second argument to 'document.register'
document.register('my-element', {
  prototype: Object.create(HTMLElement.prototype, {
    createdCallback: {
      value: function() {
        this.innerHTML = '<h1>ELEMENT CREATED!</h1>';
      }
    }
  })
});
```

We’ve defined a createdCallback function which will run every time a new instance of the element is created.

We could also optionally define attributeChangedCallback, enteredViewCallback and leftViewCallback.

Inside our callback we can modify our new element however we like. In this case, we’ve set its innerHTML.

So far we’re able to dynamically modify the contents of our custom element, but this isn’t too different from the custom widgets of today. In order to complete the picture, we need a way to provide encapsulation to our new element by hiding its internals.

Encapsulation with Shadow DOM

We’re going to modify our createdCallback a bit.

This time, instead of setting the innerHTML directly on our custom element, we’re going to do something quite different:
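Based on the description that follows, the modified registration might look like this sketch (using the createShadowRoot API of the time):

```javascript
document.register('my-element', {
  prototype: Object.create(HTMLElement.prototype, {
    createdCallback: {
      value: function() {
        // Create a shadow root instead of setting innerHTML
        // directly on the custom element
        var shadow = this.createShadowRoot();
        shadow.innerHTML = '<h1>SHADOW DOM!</h1>';
      }
    }
  })
});
```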

In this example, you would see the words ‘SHADOW DOM!’ when looking at the page, but inspecting the DOM would reveal a single, empty <my-element /> tag. Instead of modifying the containing page, we’ve created a new shadow root inside our custom element using this.createShadowRoot().

Anything inside of this shadow root, while visible to the naked eye, is hidden from DOM APIs and CSS selectors in the containing page, maintaining the illusion that this widget is only a single element.

If we were writing a custom calendar widget, the shadow root is where our complex calendar markup would go, allowing us to expose a single tag as a simple interface to its hidden complexity.

Accessing the “light DOM”

So far, our custom element is just an empty tag, but what happens when elements are nested inside our new component? We may want a widget with similar flexibility to the <select> tag, which can contain many <option> tags.

As a working example, let’s assume the following markup.

```html
<my-element>
  This is the light DOM.
  <i>hello</i>
  <i>world</i>
</my-element>
```

As soon as a new shadow root is created against this custom element, its child nodes are no longer visible. We refer to these hidden child nodes as the “light DOM”. If we inspect the page or traverse the DOM we can see these hidden nodes, but the end user would have no clue these elements exist at all.

Without shadow DOM, this example would simply appear as ‘This is the light DOM. hello world’.

When we set up the shadow DOM inside the createdCallback function, we can use the new content tag to distribute elements from the light DOM into the shadow DOM:

```javascript
createdCallback: {
  value: function() {
    var shadow = this.createShadowRoot();

    // The child nodes, including 'i' tags, have now disappeared
    shadow.innerHTML = 'The "i" tags from the light DOM are: ' +
      '<content select="i" />';
    // Now, only the 'i' tags are visible inside the shadow DOM
  }
}
```

With shadow DOM and the <content> tag, this now appears as ‘The “i” tags from the light DOM are: helloworld’. Note that the <i> tags have rendered next to each other with no whitespace.

Encapsulating styles

What’s important to understand about Shadow DOM is that we’ve created a clean separation between our widget’s markup and the outside world, known as the shadow boundary.

One powerful feature of Shadow DOM is that styles declared inside do not leak outside of the shadow boundary.
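Styling from the outside, however, requires an explicit contract. Suppose our widget's shadow DOM exposes one of its internal elements through a "part" attribute (hypothetical markup, consistent with the "World" example discussed below):

```html
<div>Hello <em part="world">World</em></div>
```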

The ::part() pseudo-element lets us style any element with a part attribute:

```css
my-element::part(world) {
  color: green;
}
```

This part contract is essential in maintaining encapsulation. In the previous example, we styled the word “World”, but users of our widget would have no idea that it’s actually an em tag under the hood.

One important benefit of this system is that we’re free to dramatically change the markup inside our widget between versions, so long as the “part” attributes are still in place.

Just the beginning

Web Components finally give us a way to achieve simple, consistent, reusable, encapsulated and composable widgets, but we’re only just getting started. It’s a great time to start experimenting.

Before you can begin, you need to make sure your browser has the relevant features enabled. If you use Chrome, head to chrome://flags and enable “experimental Web Platform features”.

To target browsers that don’t have these features enabled, you can use Google’s Polymer, or Mozilla’s X-Tag.

Time to experiment

All of the functionality presented in this article is simply an exercise in emulating standard browser behaviour. We’ve been working with the browser’s native widgets for a long time, so taking the step towards writing our own isn’t as difficult as it might seem.

If you haven’t created a component before, I urge you to open up the console and experiment. Try making a custom element, then try creating a shadow root (against any element, not just Custom Elements).

This experimentation will naturally raise questions about topics not fully discussed in this article. Do we have to use strings to define the markup in our Shadow DOM? No, this is where HTML Templates come in. Can we bundle an HTML template with our component’s JavaScript? Yes, with HTML Imports.

One of the lesser known yet more surprisingly powerful features of AngularJS is the way in which it allows promises to be used directly inside views.

To better understand the benefits of this feature, we’ll first migrate a typical callback-style service to a promise-based interface.

Working with callbacks

For now we’ll sidestep a discussion on the advantages of promises compared to callbacks, and focus solely on their mechanics.

As a working example, let’s look at a service with a single ‘getMessages’ function.

```javascript
var myModule = angular.module('myModule', []);

// From this point on, we'll attach everything to 'myModule'
myModule.factory('HelloWorld', function($timeout) {
  var getMessages = function(callback) {
    $timeout(function() {
      callback(['Hello', 'world!']);
    }, 2000);
  };

  return {
    getMessages: getMessages
  };
});
```

This admittedly contrived service’s ‘getMessages’ function takes a callback, then waits two seconds (using Angular’s $timeout service) before passing an array of messages to the callback function.

If we use this example service inside a controller, it looks like this:
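A sketch of such a controller, using the ‘HelloCtrl’ name that appears in the view later in this article:

```javascript
myModule.controller('HelloCtrl', function($scope, HelloWorld) {
  // Pass a callback that places the messages on the scope
  HelloWorld.getMessages(function(messages) {
    $scope.messages = messages;
  });
});
```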

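Now let’s migrate the service itself to a promise-based interface. A sketch using Angular’s $q service:

```javascript
myModule.factory('HelloWorld', function($q, $timeout) {
  var getMessages = function() {
    // The deferred's state is tucked away inside this closure
    var deferred = $q.defer();

    $timeout(function() {
      deferred.resolve(['Hello', 'world!']);
    }, 2000);

    // Expose only the promise as a public hook to its state
    return deferred.promise;
  };

  return {
    getMessages: getMessages
  };
});
```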
You’ll notice that we’re now relying on Angular’s $q service (based on Kris Kowal’s Q) to create a ‘deferred’. We return the deferred’s ‘promise’ property as a public hook to its state, which is safely tucked away inside a closure.

Now that we have a promise API, we need to update the service interaction inside our controller.
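The promise-consuming version of the controller might look like this sketch:

```javascript
myModule.controller('HelloCtrl', function($scope, HelloWorld) {
  // 'getMessages' now takes no arguments and returns a promise
  HelloWorld.getMessages().then(function(messages) {
    $scope.messages = messages;
  });
});
```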

Our controller is essentially the same, except ‘getMessages’ no longer accepts a callback. Instead, it takes no arguments, and returns a promise object.

As is standard for promises, it has a ‘then’ function that takes two arguments: a success callback and an error callback. For our purposes, we’ll ignore the powerful error handling capabilities that promises afford to asynchronous code.

Wiring up the view

In both the callback and promise versions of our controller, we end up with a ‘messages’ property on the scope. Which means, of course, that our view would remain unchanged.

A very simple view that only displays the messages would look like this:

```html
<body ng-app="myModule" ng-controller="HelloCtrl">
  <h1>Messages</h1>
  <ul>
    <li ng-repeat="message in messages">{{ message }}</li>
  </ul>
</body>
```

In this case, we’re simply iterating over the messages that were returned from the ‘HelloWorld’ service.

Using promises directly in the view

AngularJS allows us to streamline our controller logic by placing a promise directly on the scope, rather than manually handling the resolved value in a success callback.

Our original controller logic for handling the promise was relatively verbose, considering how simple the operation is:
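That logic looked something like the first controller below; with promise unwrapping in the view, it collapses to a single assignment (illustrative code, relying on the automatic promise resolution available in AngularJS at the time):

```javascript
// Verbose: manually unwrapping the promise into the scope
myModule.controller('HelloCtrl', function($scope, HelloWorld) {
  HelloWorld.getMessages().then(function(messages) {
    $scope.messages = messages;
  });
});

// Streamlined: place the promise directly on the scope and let
// AngularJS resolve it in the view
myModule.controller('HelloCtrl', function($scope, HelloWorld) {
  $scope.messages = HelloWorld.getMessages();
});
```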

With the 2.x line, jQuery dropped support for legacy versions of Internet Explorer, while the 1.x line continues to support them. In the world of modern, evergreen and mobile browsers, this was a necessary move to ensure jQuery stays relevant. Of course, the split leaves plugin authors with a bit more responsibility.

Where previously we could simply require the most recent version of jQuery, we are now likely to want to support both 1.9.x and 2.x, allowing our plugins to work everywhere from IE6 to the most bleeding edge browsers.

To facilitate this, we’ll run through the creation of a plugin using the popular JavaScript build tool, Grunt. We’ll then configure our unit tests to run automatically across multiple versions of jQuery.

A simple jQuery plugin

Note: If you have an existing plugin that doesn’t use Grunt, I’d suggest running through these steps in a clean directory and porting the resultant code into your project (with some manual tweaks, of course).
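To scaffold the plugin, we can use grunt-init with its jQuery plugin template (assuming both are installed, per the grunt-init documentation):

```shell
# With grunt-init installed globally and the jQuery template
# placed in ~/.grunt-init, scaffold a plugin in the current directory:
$ grunt-init jquery
```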

The default task is run when the ‘grunt’ command is executed without any arguments:

```shell
$ grunt
```

Inspecting the test

The QUnit test for our plugin resides in ‘test/plugin.html’. Its default markup looks like this:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Plugin Test Suite</title>
  <!-- Load local jQuery. This can be overridden with a ?jquery=___ param. -->
  <script src="../libs/jquery-loader.js"></script>
  <!-- Load local QUnit. -->
  <link rel="stylesheet" href="../libs/qunit/qunit.css" media="screen">
  <script src="../libs/qunit/qunit.js"></script>
  <!-- Load local lib and tests. -->
  <script src="../src/plugin.js"></script>
  <script src="plugin_test.js"></script>
  <!-- Removing access to jQuery and $. But it'll still be available as _$,
       if you REALLY want to mess around with jQuery in the console.
       REMEMBER WE ARE TESTING A PLUGIN HERE, THIS HELPS ENSURE BEST
       PRACTICES. REALLY. -->
  <script>window._$ = jQuery.noConflict(true);</script>
</head>
<body>
  <div id="qunit"></div>
  <div id="qunit-fixture">
    <span>lame test markup</span>
    <span>normal test markup</span>
    <span>awesome test markup</span>
  </div>
</body>
</html>
```

This page is responsible for including jQuery, QUnit (both JavaScript and CSS), our plugin, and any helpers required. It also provides the markup needed for QUnit to generate an HTML report.

You’ll notice, the first script file included is ’../libs/jquery-loader.js’. If we look at the contents of that file, we find this:

```javascript
(function() {
  // Default to the local version.
  var path = '../libs/jquery/jquery.js';

  // Get any jquery=___ param from the query string.
  var jqversion = location.search.match(/[?&]jquery=(.*?)(?=&|$)/);

  // If a version was specified, use that version from code.jquery.com.
  if (jqversion) {
    path = 'http://code.jquery.com/jquery-' + jqversion[1] + '.js';
  }

  // This is the only time I'll ever use document.write, I promise!
  document.write('<script src="' + path + '"></script>');
}());
```

By including this script, we now have the ability to add ‘?jquery=X.X.X’ to the query string, when viewing this page in the browser.

Doing this will cause a hosted version of our specified version of jQuery to be included in the page rather than the default version provided inside our project.
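Since the version lookup is plain string matching, we can exercise the same regular expression in isolation (a standalone sketch; the helper name is hypothetical):

```javascript
// Extract a 'jquery' param from a query string,
// using the same regex as jquery-loader.js
function jqueryVersionFromSearch(search) {
  var match = search.match(/[?&]jquery=(.*?)(?=&|$)/);
  return match ? match[1] : null;
}

console.log(jqueryVersionFromSearch('?jquery=1.9.1'));         // → '1.9.1'
console.log(jqueryVersionFromSearch('?foo=bar&jquery=2.0.0')); // → '2.0.0'
console.log(jqueryVersionFromSearch('?foo=bar'));              // → null
```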

Preparing the build

You might think that we could simply modify the QUnit file matcher in our Gruntfile to add a query string, but this won’t work. Files must exist on the file system, and query strings aren’t part of that vocabulary.

To automatically run our tests with different query strings, we first need to host our test on a local server.

You’ll notice that this time, QUnit is accessing a URL instead of a file. This means that we’re now free to add query strings to our URLs, allowing us to automate testing across multiple versions of jQuery with ease:

Making it bulletproof

By default, this setup loads each version directly from the jQuery site. If you’re anything like me, you sometimes develop with little to no internet connectivity, and this limitation would prevent you from running the full suite.

It’s a good idea to add each major supported version of jQuery to your ‘lib/jquery’ directory (with a ‘jquery-x.x.x’ naming convention), and modify ‘libs/jquery-loader.js’ to load these local copies instead:

```javascript
(function() {
  // Default to the local version.
  var path = '../libs/jquery/jquery.js';

  // Get any jquery=___ param from the query string.
  var jqversion = location.search.match(/[?&]jquery=(.*?)(?=&|$)/);

  // If a version was specified, use the local copy of that version.
  if (jqversion) {
    path = '../libs/jquery/jquery-' + jqversion[1] + '.js';
  }

  // This is the only time I'll ever use document.write, I promise!
  document.write('<script src="' + path + '"></script>');
}());
```

Testing in the cloud

As always, it’s a great idea to automatically run these tests after every push to GitHub, or on every pull request that is sent to you. To achieve this, we can leverage Travis CI with only a couple of changes to our project.
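First, add a ‘.travis.yml’ file to the root of the project. A minimal example for a Node-based Grunt setup might look like this (the Node version and the global grunt-cli install are assumptions):

```yaml
language: node_js
node_js:
  - "0.8"
before_script:
  - npm install -g grunt-cli
```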

Then, set the ‘npm test’ script in your ‘package.json’ file to run our new Grunt ‘test’ task:

```javascript
// Snip...
"scripts": {
  "test": "grunt test"
},
// Snip...
```

Finally, follow the official Travis CI guide to create an account, if needed, and activate the GitHub service hook. Once completed, you’ll have the confidence of knowing that the downloadable version of your plugin can’t be broken by mistake.

Keeping it in check

Now that we have a framework for testing multiple versions, it’s worth testing the minimum jQuery version your plugin supports, and each major version above it.

At a minimum, I’d recommend testing in 1.9.x and 2.x to ensure that any differences between the two versions don’t inadvertently break your plugin. Since both versions will be developed in parallel as long as old versions of IE maintain significant market share, it’s the least we can do for our users.

Update (19 Feb 2013): This article now reflects changes made in Grunt v0.4