It is like the old proverb: “It takes a village to raise a web framework.” As we explored in the previous post, choosing a framework goes beyond its technical features, and this is certainly true when it comes to the wider community, which includes considerations like licensing, how open the framework is, and where to turn for education and support.

While each web framework we have been discussing is open source, there is a wide spectrum of what that actually means and how it affects the use and future development of a framework. Also, when choosing a web framework, you are, to some degree, choosing to participate in a community of developers and fellow users. How this community works and behaves can have an impact on your and your team’s ability to build the web applications you want to build.

Overview

In this post, we are going to explore each framework from three different aspects. First, we will provide an overview of the community: which commercial organizations support or fund the development of the framework, how open the project is, what licensing considerations apply, and other tidbits worth noting.

Then we will provide information on where to go to learn the framework, and finally, we will discuss where to turn for support and what options exist for commercial support.


Angular 2+

Community overview

Angular is highly popular, which has led to a large community with lots of resources. The official Angular resource page provides links to many of the community resources. While Google creates the most resources for Angular development, the Angular development team is a diverse group of software engineers.

Angular 2+ is licensed under the MIT License. We are not lawyers, but the MIT License is generally considered a very liberal and commercially friendly license. The copyright for Angular is held by Google, Inc. Angular does not belong to an independent open source software foundation.

Being an open source project, Angular actively looks for community contributions, providing guidelines on how to contribute. Angular has a code of conduct which is intended to ensure that members of the community treat others with respect.

There are several Angular 2+ events that are promoted on the Angular website as well as many other community groups that meet on a regular basis.

Learning and education

The main education resource is the angular.io Docs which provides the fundamentals of developing applications with Angular 2+. The resources page has an Education section which provides links to books, workshops, on-site training and online training. There are also links to community groups and podcasts listed.

As is to be expected with a popular framework, there is a wide spectrum of community and commercial organizations involved in Angular 2+ training.

Support

The Angular 2+ team directs their users to StackOverflow and Gitter for support. Regressions, bug reports, feature requests or documentation issues are managed on GitHub.

React + Redux

Community overview

React and Redux are highly popular, which has led to a large community. For React, there is a specific Community page which provides information on support and participating in the community, conferences, and video highlights. It also links to complementary tools and examples. The React project is part of Facebook Open Source, an initiative at Facebook to make their tools and technologies open to the world. Redux is maintained by the reactjs GitHub organization, which is designed to ensure that open source projects important to the React community have a long-term home for support and maintenance.

React v16.0+ is licensed under the MIT License. The copyright is held by Facebook, Inc. Versions from 0.12.0 up to (but not including) v16.0 were released under the BSD 3-Clause license with an additional patents grant (“BSD + Patents”). Again, we are not lawyers, but the patents clause caused a lot of controversy in the React community, with many commercial organizations stopping their use of React based on legal advice that the clause posed a legal risk to their organization. This led Facebook to reconsider its position and relicense the library under the MIT License. Prior to 0.12.0, React was licensed under the Apache License 2.0.

Redux is licensed under the MIT License. The copyright is held by Dan Abramov.

React and Redux share a code of conduct which is intended to ensure that members of the community treat others with respect. React and Redux do not belong to an independent open source software foundation, something that we feel would have helped with the recent challenges around the licensing controversy.

As discussed several times in this series, React + Redux alone are likely not the entire set of libraries you would use to build a web application. This means that each additional package will have its own license and copyright concerns.

Learning and education

The React Docs page provides a fairly extensive set of basic and advanced information on React. The main Tutorial covers the core aspects of React in depth. The Redux website is wholly focused on educating people on using Redux.

The Articles and Videos page on the Facebook GitHub wiki provides links to several resources covering different topics, as well as links to free online courses.

There are also paid online and on-site educational courses available from various commercial organizations.

Vue accepts donations through PayPal and pledges through Patreon, and a number of companies provide financial support, such as JSFiddle, Laravel, and the Shuttleworth Foundation. Vue recently joined Open Collective.

Support

Dojo 2

Community overview

Dojo 1 had a large community, though it has been shrinking; Dojo 2 is rebuilding its community efforts from the ground up.

Dojo 2 is actively developed and encourages community participation. Dojo 2 has a Gitter.im room, a newly created forum for announcements and community support, a Twitter account, and a new blog. The GitHub packages for Dojo 2 are very active, with updates nearly every day. Community interest in the project has been growing as Dojo 2 approaches its release.

Dojo is BSD licensed and completely free to use. The project is currently considering dual licensing under the Apache License 2.0. Dojo has a code of conduct which is intended to ensure that members of the community treat others with respect.

Dojo is a founding project of the JS Foundation, being an early champion for running open source projects in the open and not under the control of a single organization. That said, the majority of Dojo 2 development to date has been conducted by the team at SitePen.

Learning and education

As a new project, Dojo 2 is very early in its efforts to create educational material. The project website currently has an initial set of tutorials, with plans to add a reference guide, cookbook, and online video tutorials. Training workshops do not exist yet for Dojo 2, but will likely be offered by SitePen and other training providers.

Support

Dojo 2 has a new support forum for announcements and community support. The organization also encourages feedback and issues via each GitHub repo and questions via Stack Overflow.

Ember

Community overview

Ember is the most mature of the frameworks compared in this series, and has a smaller, but highly loyal and active community. Spearheaded by project co-creators Yehuda Katz and Tom Dale, Ember remains actively maintained and developed.

Ember is licensed under the MIT License. The copyright for Ember is held by Yehuda Katz, Tom Dale, and Ember.js contributors. Ember does not belong to an independent open source software foundation.

Being an open source project, Ember actively looks for community contributions, providing guidelines on how to contribute. Ember has a code of conduct which is intended to ensure that members of the community treat others with respect.

Learning and education

Ember provides in-depth guides for the framework itself and several of the main packages. The community has a solid collection of curated resources and examples on both the Ember website and on Ember Watch.

Ember encourages asking questions via Stack Overflow or the Ember forum. Ember also provides a Slack channel and an IRC channel (#emberjs on irc.freenode.net) where users can interact with developers and the community at large.

Several online training sites offer Ember-specific courses, such as Ember School and Lynda.

Aurelia

Learning and education

For an individual, the documentation, which includes guides and API docs, is probably a good starting point. If the docs are insufficient, the Aurelia Gitter channels seem to be a good resource for additional questions. For a team, those resources also help, and commercial training is available as well. There is also an Aurelia in Action book available.

Support

Summary

In general, each of the surveyed frameworks has a solid community. Due to their size and popularity, the Angular and React plus Redux communities are the largest, the Vue and Ember communities are strong, the Aurelia community is smaller but very active, and the Dojo 2 community has a solid foundation in place as they work on a final release.

Angular, React plus Redux, Ember.js, and Dojo 2 all have a diversity of contributors and authors. Vue.js and Aurelia are currently heavily reliant upon sole individuals who developed and maintain the frameworks. Only Dojo 2 is currently part of an open source foundation.

| Feature | Angular 2+ | React + Redux | Vue.js | Dojo 2 | Ember.js | Aurelia |
| --- | --- | --- | --- | --- | --- | --- |
| License | MIT | MIT or BSD (3-clause with patents) | MIT | New BSD (3-clause) | MIT | MIT |
| Copyright | Google | Facebook / Dan Abramov | Evan You | JS Foundation | Yehuda Katz, Tom Dale, and Ember.js contributors | Blue Spire Inc. |
| OSS Foundation | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Support | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Angular 2+

Angular 2+ has a large community, driven by its popularity. While the community is led primarily by Google, there are many other organizations and individuals involved in the community. Of the top 5 contributors to Angular, only two currently work for Google. There are many choices around learning materials as well as ways of getting support.

React + Redux

React + Redux has a large community, driven by its popularity. For React, the top 5 contributors currently work at Facebook. Redux was developed by Dan Abramov before he joined Facebook, and its second-largest contributor does not work at Facebook. There are many choices around learning materials as well as ways of getting support.

One of the biggest challenges is that the two libraries together do not provide a complete solution. Looking at the complementary tools for React highlights just how large the ecosystem is and how many choices a team or developer needs to make, all with varying degrees of quality and varying options around support.

Vue.js

Vue.js has an active and thriving community with the effort spread across many organizations and companies, though it may be worth considering the risk of relying on essentially a single committer in Evan You. There is no shortage of resources and community help for learning and using Vue.js, and commercial training and support options are available as well.

Dojo 2

Dojo 2 is a new project and does not yet have a large community, though Dojo 1 had a large community at various times. There is a fairly wide diversity of contributions across packages for Dojo 2, though the vast majority of the code for Dojo 2 comes from current or former employees of SitePen. The Dojo 2 team is working to provide a solid foundation for community support for the Dojo 2 release. Commercial support options are available directly from the active contributors to Dojo 2.

Ember.js

Ember has a solid, stable community and a long history of being supportive of its members. Ember’s contributor base is diverse, though the top 5 contributors work for either LinkedIn or Tilde. Ember provides extensive documentation, resources, and support channels, with various training and support options available.

Aurelia

Aurelia has a smaller but loyal and active community. Over 90% of the main packages are authored by Rob Eisenberg, so again it may be worth considering the risk of depending on a single individual. Training and support options are available from both the community and commercial providers.

Up Next

In the next and final installment of the series, we’ll summarize our thoughts after this in-depth look at frameworks and provide some final conclusions.

“Blockchain” is the newest term to enter the tech industry’s buzzword repertoire. Whether a company is processing sub-second banking transactions or transporting artisanal goat cheeses across state lines, it seems as though any company not investigating this technology, the same technology that powers infamous cryptocurrencies like Bitcoin, will surely go the way of the dodo. But what is this magical “blockchain” and how can a technology commonly associated with dark and scary cryptocurrencies ever be used to change entire industries as we know them? Well, let’s find out!

“Blockchain” Overload

Much like the term “cloud”, the term “blockchain” is heavily overloaded and oftentimes used incorrectly. Without delving into the historical origins behind the confusion around the term itself, let’s understand exactly what a blockchain is.

A blockchain is a list of transactions that is stored on many different machines. Sometimes called a “distributed ledger”, modifications to this transactional list are replicated to all connected machines, or “peers”, very quickly. A blockchain is implemented as a piece of software; in practice, this software contains a local database with its own replicated dataset, capable of notifying other peers when data is changed and of guaranteeing that all peers maintain the same data.

While each blockchain implementation is different, most provide key pieces of functionality beyond the ability to store an identical list of transactions on multiple machines; they can also provide permission systems for who can read and write transactions, and perhaps most importantly, they can cryptographically guarantee the validity of transactions, making malicious modifications either glaringly obvious or outright impossible.

It’s important to understand that there is no single “blockchain” in computing much like there is no single “cloud”; rather, a blockchain is a peer-to-peer network with a distributed ledger that’s created by running the same software on many different computers.

What About Bitcoin?

The idea of a blockchain as a decentralized ledger that can guarantee the validity of each transaction lends itself perfectly to the use-case of financial payments. Bitcoin is a form of digital currency – a way to send, receive, and store value – that uses a blockchain for all payments.

Bitcoin’s Blockchain

The blockchain that Bitcoin is built on top of has the same architecture as described above: many individual computers run official Bitcoin software and have identical lists of every past bitcoin transaction. As new transactions occur, such as when one user sends bitcoins to another user, the transactions are validated independently by each computer running the software, using complex, computationally-expensive algorithms. These validation algorithms verify new transactions using cryptographically-secure techniques that rely on all previous transactions. So, because each peer independently and concurrently verifies new transactions, and because this verification relies on the cryptography of all past transactions, it’s very difficult for any single peer to maliciously create, destroy, or modify transactions. This mathematically-guaranteed security is why every bitcoin transaction ever made can securely be viewed publicly.

Where Do Bitcoins Originate?

It’s critical to have a basic understanding of how the bitcoin blockchain works in order to answer its most common question: “Where do bitcoins come from?”

Computers running the official Bitcoin software that verifies transactional validity are rewarded a fractional amount of bitcoin for every transaction they verify, a process commonly referred to as “mining” for bitcoins. The computation involved in verifying and adding transactions is very difficult and thus requires powerful computers that consume high amounts of electricity. By design, the difficulty of these algorithms increases as more peers install the Bitcoin software in the hopes of verifying transactions to mine bitcoin. This means that the monetary cost to mine a single bitcoin in terms of electricity and physical space (over $1,000 at the time of this post) increases as network usage increases. This important fact, coupled with inevitable investor speculation, directly drives the fiat value of a single bitcoin.

Blockchain in Business

Bitcoin has proven to be a promising method for transferring value due to certain key advantages it offers over traditional payment systems: transactions are instant and don’t involve banks, transaction records are permanent, and transactions can be proven. But while Bitcoin may get more headlines, its underlying blockchain technology is the focus of global industries for the same potential business advantages: decentralization, immutability, and provability. While there’s no shortage of vaporware reminiscent of the early dot-com era, promising to solve problems users never knew they had using the “power of the blockchain,” real use cases for blockchains in business do potentially exist outside of finance.

Decentralization

Many businesses rely on a centralized service model where data is stored or processed by a single entity. Think of popular social media services where user data is stored in central databases: users are solely reliant on the social media service for the integrity and longevity of their data. What happens if the service ceases operations, experiences a data breach, or inexplicably removes user content? By serializing certain data and storing it as transactions on a public blockchain instead of in a traditional centralized database, risks of malicious data tampering could be greatly reduced. For example, if user data is maliciously changed by one compromised peer, all other peers would reject the change, drastically limiting the effect a hacker can have on a social network. Similarly, if one peer crashed altogether, all other peers would continue operating transparently to end users.

Immutability

Another use case being actively investigated in the context of distributed ledger technology is an immutable record of events. This type of technology is far more common than it may sound; for example, consider a car sharing service where users pay to check out and drive shared cars. In a traditional database architecture, records of who checked out what car would be kept in a database table. What happens if a hacker checks out multiple cars, never returns them, and erases any database records that proved who last checked out each car? While safeguards against such hacking already exist, storing car checkout transactions on a distributed blockchain can mitigate this risk almost entirely. Because a record of old transactions is required to verify new transactions, rewriting blockchain history is exceedingly difficult in a multi-peer network.

Provability

Similarly to immutability, the architecture of a blockchain-based system inherently lends itself to a notion of provability: it can be proven that a particular transaction took place at a given time. In the context of bitcoin and other financial applications of distributed ledger technology, transactional provability isn’t just nice, it’s game changing. But the advantages offered by the ability to prove that transactions took place extend beyond finance. Consider a simple fishing supply chain that connects fishermen to fisheries and fisheries to restaurants. The restaurant owner should be paying prices for fish based on how much the fishery had to pay the fishermen. How can the restaurant owner be sure they’re paying a price that’s accurately based on what the fisheries paid? If all parties involved were part of the same blockchain-based network, and all transactions were recorded on a collective blockchain, the restaurant owner could be guaranteed that the price they pay is accurately derived from a provable transaction on the blockchain. Some blockchains even support the ability to write code that’s executed when certain transactions occur; in our example, the price that restaurants pay could be automatically calculated based on the previous transaction between the fishermen and the fishery.

So…profit?

While the benefits of blockchain technology sound promising, the successful and widespread application of blockchains outside of the financial sector simply hasn’t happened yet. Industries have been operating using traditional, centralized, database-driven models for years despite data breaches and service shutdowns. The promise of decentralization also comes with philosophical caveats: if middlemen are removed from common services like ride sharing, who takes responsibility when inevitable subjective issues occur? By design, data decentralization using blockchains necessitates multiple computers or “peers” to verify transactions independently of one another. This works well in the context of a cryptocurrency like bitcoin where users are incentivized to become verifiers with the promise of mining rewards; but in the context of social media as described above, how would users be incentivized to become verifiers of blockchain-based social media transactions? If only one company is acting as the sole peer in a blockchain network, it’s as central as a traditional database. And while even companies like IBM are investigating supply chain issues similar to the contrived example described above, supply chains have been operating successfully for hundreds of years, long before blockchains could be used to verify product integrity.


Despite rampant cross-industry attempts at using blockchain technology to solve problems no one has, the story of the blockchain is only beginning. Exciting blockchain-based research is focused on areas such as legal contracts, supply chain dynamics, medical records, and microeconomic industries that traditionally involve middlemen, including home sharing and data storage. While most current usage of blockchains for non-financial problems is investigatory and forced, as with any new technology, experimentation must be conducted before a technology can be correctly applied.

JS Foundation: Interledger.js

The shallow explanation of blockchain technology provided thus far has glossed over an important and limiting architectural aspect of distributed ledgers: transactions between two participants, such as when one user sends bitcoins to another user, can only take place on the same blockchain network. For example, if a user desperately needs to send 10 bitcoins to a friend, but the friend only uses another cryptocurrency like ethereum, the transfer isn’t possible. This is because the bitcoin network is made up of a different set of peer computers, each running different software than those on the ethereum network: the networks are independent and know nothing of one another. In a world where multiple cryptocurrencies are becoming more and more common and reliance on centralized banks is becoming less appealing, the ability to easily send value between architecturally-siloed networks like blockchains is essential.

The Interledger standardization effort defines a set of open protocols and provides tools for transferring value between payment networks, including two completely independent cryptocurrency blockchains. It’s agnostic to the underlying payment network and enables interoperability between any two value stores even if those stores are completely incompatible, such as sending bitcoin to a wallet on the ethereum network. Interledger shields users from having to manage value exchange rates when transferring assets between disconnected networks.

The Future

The excitement that the potential benefits of blockchain technology has sparked in almost every industry on earth is extreme, and maintaining a concrete understanding of new concepts in such an emerging landscape can be difficult. Differing explanations of the term “blockchain” and its inevitable coupling with the term “Bitcoin” only add to the confusion, but one fact remains clear: the decentralized immutability of data that a blockchain can provide, especially within a network of untrusting participants, is paradigm-shifting. Projects like Interledger open the door to a new class of payment applications that need no longer be confined to a single payment network. Just like the dot-com boom of the ’90s, the question for blockchain technology has shifted from “what if” to “what’s next.”

Next steps

Need help exploring how blockchain can help your organization, or leveraging blockchain or other emerging technologies within your applications? Contact us to discuss how we can help!

As we create and improve open source software, and build many applications for our customers, we’re constantly looking for things that will improve the software we create. Part of this is looking at an often dizzying array of proposed and emerging standards, and finding those that feel efficient and ready for use. Here we’ll explore five emerging web standards that we’ve started using or are strongly considering using in future work.

CSS Variables / Custom Properties

Web engineers have been using variables to create and manage complex systems of CSS for over ten years, and they continue to be one of the main features driving demand for CSS preprocessors like Sass, LESS, and Stylus. Used well, they can greatly increase the maintainability of large codebases by standardizing and consolidating all values used for colors, fonts, padding, etc. Over time, preprocessor variables have converged on a set of shared features, by design or convention:

They are prefixed: e.g. by $ or @, to prevent conflicts with existing CSS keywords

They are scoped: $bgColor defined within .container will be available in .container > .child, but not vice-versa.

Native CSS variables (or “custom properties”) have adopted all of these conventions, making the switch to native support easy and intuitive. CSS variables must be prefixed with two dashes (--); they are scoped to the selector in which they are defined, are inherited by that selector’s descendants, and may be overridden within those descendants. For example:
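A sketch of what that looks like in practice (the selectors and variable name are invented for illustration):

```css
:root {
  --theme-color: #0074d9; /* the closest thing to a "global" variable */
}

.container {
  --theme-color: #ff4136;              /* overrides the inherited value */
  background-color: var(--theme-color);
}

.container > .child {
  /* inherits --theme-color from .container unless redefined here */
  border: 2px solid var(--theme-color);
}
```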

There are also a few obvious differences in this example between CSS variables and preprocessor variables:

CSS variables must be wrapped in var() when used.

The prefix, --, is different from any prefix used in existing preprocessors, so that the two may be used in tandem.

CSS variables must be defined within a selector, so the closest thing to a “global” scope is :root.

Preprocessors are unaware of the DOM, so they rely on nesting for inheritance. The value of CSS variables inherit down the DOM tree in the same way any other value is inherited.

The other significant difference between preprocessor variables and CSS variables is not obvious in the above code: since they aren’t compiled down to static values, CSS variables may be updated in the browser at run-time. This means CSS variables are available to be read and written in JavaScript for use in calculations or animations. The following Codepen demonstrates how to create animated accordions using CSS variables and JavaScript:
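The core of such an accordion can be sketched as follows (the .open class, the --panel-height property, and the matching stylesheet are assumptions):

```javascript
// Toggle an accordion panel by writing a CSS variable from JavaScript.
// Assumed stylesheet:
//   .panel { height: var(--panel-height, 0px); overflow: hidden; transition: height 0.3s; }
function toggleAccordion(panel) {
  const isOpen = panel.classList.toggle('open');
  // scrollHeight measures the full content height, giving the CSS
  // transition a concrete pixel value to animate toward:
  panel.style.setProperty('--panel-height', isOpen ? `${panel.scrollHeight}px` : '0px');
}
```

Because the variable is animated by the CSS transition, the JavaScript only has to set a target value rather than drive each animation frame.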

CSS Modules

Variables are not the only concept that has leaked from JavaScript to CSS. Especially in recent years, JavaScript developers have been turning their eye to CSS organization and, in an extension of Atwood’s Law, thinking “I could do that better.” There are some good reasons to believe that:

Unlike JavaScript, CSS classes all exist within a global namespace

Resolving conflicting styles is brittle and prone to unexpected behavior in large, compiled stylesheets, or only solved with increasing specificity

Styles “leak” down the DOM hierarchy, and can break child element styles in unexpected ways.

Managing styles with JavaScript allows CSS rules to be based on run-time logic.

React in particular pioneered the idea that inline styles, controlled through JavaScript, could be the answer to all those problems (maybe not styles leaking to children, but it does reduce the number of cases where styles would conflict). However, it comes with its own set of drawbacks:

Pseudo-classes (e.g. :hover or :focus) are easily accomplished with CSS, but must be faked with JavaScript

Media queries are labor-intensive to recreate in JavaScript

Inline styles lose the ability to override with greater specificity, since they are already at the top of the specificity hierarchy

CSS Modules are in some ways the CSS developer’s comeback to JavaScript developers intruding on their turf: a way to address criticisms and improve stylesheets without doing away with them entirely. CSS modules essentially boil down to locally-scoped CSS files that may be imported into JavaScript, and compile to unique class names.

The source stylesheet would compile to uniquely-namespaced class names in both the emitted HTML and CSS.
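As a sketch of that round trip (the file name, class names, and generated hash are all invented for illustration):

```css
/* buttonComponent.css — source */
.normal {
  color: white;
  background-color: blue;
}

/* After `import styles from './buttonComponent.css'` in JavaScript,
   styles.normal resolves to a generated, globally-unique class name,
   and the emitted stylesheet looks something like: */
.buttonComponent__normal__1kDwQ {
  color: white;
  background-color: blue;
}
```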

Since the classes contained in buttonComponent.css are locally scoped and in a clearly named CSS file, there is no longer any need for specific class names like .button. Instead, the recommended format is to use a single standardized “root” class name like .root or .normal, and then state-specific class names like .error, .success, or .disabled, all of which may be conditionally applied with JavaScript.

CSS modules also solve the problem of brittle style overrides by discouraging the use of multiple classes in favor of composes. The composes keyword is similar to preprocessor decorators like Sass’ @extend, except instead of compiling the styles together in CSS, composes returns multiple namespaced class names in a predictable order, which are then applied from JavaScript to the element’s HTML class attribute.
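A sketch of composes in use (the class names are invented for illustration):

```css
/* buttonComponent.css */
.root {
  padding: 8px 16px;
  border-radius: 4px;
}

.error {
  /* composes does not copy styles; instead, styles.error in JavaScript
     resolves to BOTH generated class names, in a predictable order, e.g.
     "buttonComponent__root__ab1 buttonComponent__error__cd2" */
  composes: root;
  background-color: red;
}
```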

CSS also has one more powerful tool to increase the modularity of its styles: the all property. Separate from CSS modules, it can be used to reset all properties to their initial state, e.g. with .root { all: initial; }. Since this is a new CSS property rather than a pattern that relies on a compiler like Webpack or Browserify, support is still lacking in IE and Edge.

Async / Await / Can make your code great

JavaScript has long Promised to improve the handling of asynchronous code, with increasing success and the occasional (caught) error. A callback to the days before promises might look like the following nested cone of doom:
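A sketch of that era (getUser, getOrders, and getTotal are hypothetical error-first async APIs, stubbed here so the example runs):

```javascript
// Hypothetical APIs following the Node error-first callback convention:
function getUser(id, cb) { setTimeout(() => cb(null, { id, name: 'Ada' }), 0); }
function getOrders(user, cb) { setTimeout(() => cb(null, [10, 20]), 0); }
function getTotal(orders, cb) { setTimeout(() => cb(null, orders.reduce((sum, n) => sum + n, 0)), 0); }

// The nested "cone of doom": each step nests inside the previous callback,
// and every level needs its own error handling:
getUser(1, function (err, user) {
  if (err) { return console.error(err); }
  getOrders(user, function (err, orders) {
    if (err) { return console.error(err); }
    getTotal(orders, function (err, total) {
      if (err) { return console.error(err); }
      console.log('total:', total); // → total: 30
    });
  });
});
```

Promises flatten this into a chain of .then() calls with a single .catch() at the end.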

The chained syntax is clearly cleaner and more fetching than the earlier pyramid of passed-in callbacks and error handlers. With async functions, however, developers need no longer await the day when writing asynchronous code will be as clear and intuitive as synchronous code. The initial example using async/await would look like this:
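A sketch using hypothetical promise-returning APIs (getUser, getOrders, and getTotal are stubbed here for illustration):

```javascript
// Hypothetical promise-returning APIs, stubbed so the example runs:
const getUser = (id) => Promise.resolve({ id, name: 'Ada' });
const getOrders = (user) => Promise.resolve([10, 20]);
const getTotal = (orders) => Promise.resolve(orders.reduce((sum, n) => sum + n, 0));

// Reads like synchronous code, but each await yields until its promise resolves:
async function showTotal(userId) {
  try {
    const user = await getUser(userId);
    const orders = await getOrders(user);
    const total = await getTotal(orders);
    console.log('total:', total); // → total: 30
    return total;
  } catch (err) {
    console.error(err);
  }
}

showTotal(1);
```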

Each await will pause code execution until its promise is resolved, and the whole set of code can be wrapped in a try/catch block, just as with synchronous code. The most important points to remember are that await may only be used inside an async function designated by that keyword, and the async function itself is asynchronous and will not block surrounding code.

While the triple promise example shows how three promises can be executed one after the other, Promise.all and Promise.race can be used in conjunction with async/await to run them concurrently:
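A sketch of both helpers (delay is a hypothetical stand-in for any promise-returning call):

```javascript
// delay resolves with `value` after `ms` milliseconds:
const delay = (ms, value) => new Promise((resolve) => setTimeout(resolve, ms, value));

async function concurrent() {
  // All three promises start immediately; await resolves once the slowest settles:
  const [a, b, c] = await Promise.all([delay(30, 'a'), delay(10, 'b'), delay(20, 'c')]);
  console.log(a, b, c); // → a b c

  // Promise.race settles with whichever promise settles first:
  const winner = await Promise.race([delay(30, 'slow'), delay(10, 'fast')]);
  console.log(winner); // → fast

  return { all: [a, b, c], winner };
}

concurrent();
```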

Technically, await does not even need to be passed a promise, since it will wrap any non-promise value in Promise.resolve. Together with the fact that any promise resolution is pushed to the end of the call stack, this can result in some slightly odd but fun possibilities:
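One such oddity, sketched: awaiting a plain number still defers the rest of the function to the end of the current call stack.

```javascript
async function oddity() {
  console.log('first');
  const value = await 42; // equivalent to: await Promise.resolve(42)
  // Everything after the await runs as a deferred continuation:
  console.log('third: got', value); // → third: got 42
  return value;
}

oddity();
// This synchronous line runs before the code after `await`,
// even though nothing truly asynchronous happened:
console.log('second');
```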

blockingElements / inert

Managing focus has been and continues to be a poorly-solved problem for any developer who has needed to create a modal and cares about accessibility. Focus should never be allowed to enter hidden or obscured DOM elements, but creating the proper behavior is painful and labor-intensive. The two basic options are to listen to the focus event and hijack focus whenever it attempts to leave the modal, or to manually remove all non-modal elements from the focus order by setting tabindex="-1". Both solutions usually end up using a large, fragile DOM query for focusable elements somewhere in their code:
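A sketch of that typical query; the selector list is illustrative and varies from project to project:

```javascript
// Selector list for "everything focusable"; real-world lists grow unwieldy
const FOCUSABLE = [
  'a[href]',
  'button:not([disabled])',
  'input:not([disabled])',
  'select:not([disabled])',
  'textarea:not([disabled])',
  '[tabindex]:not([tabindex="-1"])'
].join(', ');

function focusableIn(container) {
  // every element inside the container that can currently receive focus
  return container.querySelectorAll(FOCUSABLE);
}
```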

Even after focus has been dealt with, hidden sections should ideally have aria-hidden set to true, so it will not be read by assistive technology like screen readers.

Two specification proposals would solve the modal problem entirely: inert and blockingElements. The first, the HTML attribute inert, would remove a DOM tree from the focus order (as if all focusable elements received tabindex="-1"), as well as hiding it from assistive technology.

blockingElements would do the almost exact opposite: expose a stack of “blocking elements” that would effectively make all other DOM trees inert. As an example, if I were to have the following DOM structure:

```html
<body>
  <div class="modal" inert>Modal content</div>
  <main>Other content, including links/buttons/etc</main>
</body>
```

To open the dialog, I would remove the inert attribute and call document.$blockingElements.push(document.querySelector('.modal')), which would render not only direct sibling trees inert, but also siblings of parents and ancestors.

IntersectionObserver

Watching for elements to scroll into view has long been the province of a proliferation of scrolling plugins using some sort of (hopefully throttled) scroll event listener. Now, the IntersectionObserver allows developers to create an observer with options and a callback to watch for elements scrolling into view with only vanilla JavaScript.

The IntersectionObserver is similar to other DOM observers like the MutationObserver, in that you create it with a callback and options, then call .observe on a DOM element. A simple implementation that updates every time the element’s intersection with the viewport increases by 10% might look like this:
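A sketch, with the element and logging purely illustrative:

```javascript
// Fire at every 10% step of visibility: [0, 0.1, ..., 1]
const thresholds = Array.from({ length: 11 }, (_, i) => i / 10);

function watchVisibility(element) {
  const observer = new IntersectionObserver(entries => {
    for (const entry of entries) {
      console.log(Math.round(entry.intersectionRatio * 100) + '% visible');
    }
  }, {
    root: null,         // null falls back to the viewport
    rootMargin: '0px',
    threshold: thresholds
  });
  observer.observe(element);
  return observer;
}
```

The available options are described below.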

root: the element containing the scroll area (falls back to the document’s viewport if null)

rootMargin: can grow or shrink the area around the root used to compute intersections

threshold: takes an array of numbers indicating at which percentages of the target element’s visibility the callback should fire. E.g. [0, 0.5, 1] would fire when the element passes the 0%, 50%, and 100% visibility marks.

To cease observing a particular element, simply call observer.unobserve(myElement); this is advisable if the callback is only needed the first time the element is scrolled past, or if the observer is being used to watch a large number of elements scrolling into and out of view. To disconnect the entire observer, call observer.disconnect();.

A good use case for using unobserve() with IntersectionObserver would be a script to lazy-load images as they scroll into view:
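A sketch, with the intersection callback factored out; the data-src attribute convention is an assumption:

```javascript
// Swap the real URL in, then stop watching that image
function onIntersect(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    entry.target.src = entry.target.dataset.src;
    observer.unobserve(entry.target);  // the callback is only needed once per image
  }
}

function lazyLoadImages() {
  const observer = new IntersectionObserver(onIntersect);
  document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
}
```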

Previously on Web Frameworks, we looked at how various frameworks deal with the concept of applications. Akin to listening to the whole album, we got a sense of how the frameworks pull it all together. In this post, we explore the common types of applications and how the frameworks we are considering might work in those use cases. If you are going to throw a party, you want to know if your favorite band is going to set the right mood.

In this post, we are looking at the aspects of mobile applications, content management or portals, consumer-facing applications, and business applications. We will discuss the common needs of each of these and then how the compared frameworks meet those needs.

Mobile

Mobile first is a buzzword that has entered the common lexicon of applications, not only web applications. Like many other buzzwords, it oversimplifies several things, but it does make the idea easy to communicate. For web applications, it largely falls into three main concepts.

First, we have the hybrid or embedded application, where the application’s outer shell is a native container. Enhanced native integration is accomplished via Apache Cordova (or derivative distributions like Adobe PhoneGap), which allows you to write all or part of a native application using web technologies.

The second major concept is the Progressive Web Application (PWA), a term originally coined by Google but largely adopted by the community. At their heart, PWAs are a set of best practices that fully leverage some of the more modern standards available in web browsers on smart devices. These standards help ensure that users can experience a web application in a way that is performant in constrained environments, provides functionality when offline, and delivers the benefits of an installed native application. PWAs also support progressive enhancement over time.

Finally, there is the concept of Server Side Rendering (SSR). This allows an application to be instantiated server-side, with certain aspects calculated before the HTML, CSS, and JavaScript are sent to the client device. In situations where resources like bandwidth or CPU speed are constrained, offloading this work to the server can help an application be more responsive: the user gets visible information early, while the rest of the application loads in the background.

Part of the reason why we are dissecting these different use cases is that concepts can often be over-hyped and not really germane to the problems you are trying to solve. One candidate for over-hype is the responsive application: one that scales from a mobile device to a desktop application, adapting the UX to meet the constraints of those different platforms. Mobile, tablet, and desktop user experiences are quite different, and pushing all the code to manage that into one web application often makes it larger and more complex than an application built specifically for a platform. Users will rarely run the same session from their mobile phone and their desktop, and the client environment can be determined at load time. The wrong approach to responsive applications can easily sacrifice the user experience for the sake of developer ease.

Because of this risk, we should be cautious about how we build our applications: are we using the right tools and approaches for the problems we are trying to solve? This is, in part, why we are presenting four major use cases. The user experience of a mobile application can be very distinct from that of a desktop business application and may require different approaches, or even different tools.

Consumer applications

We would define consumer applications as a step above content management systems or portals. You may be developing a solution like a banking website, the latest social media craze to topple Facebook, or something more revolutionary. We consider these applications to be situations where a single authenticated user interacts with processes heavily tailored to that user, but the business process is fairly well known and straightforward, leading the consumer down a generally linear path.

These applications tend to focus on processes that are easy to self-discover for the user, with little or no training on what they are performing. They are highly suitable for rapid evolutionary changes, since a user might only perform the process a few times a year, and the process is fairly straightforward and linear.

In order to achieve these consumer application goals, there are some typical patterns that frameworks can offer to make this easy:

Single Page Applications (SPA): the user would be able to utilize the application without sensing that there is a full page reload between interactions, though the URL in the browser may be updated to reflect a particular location within the application.

Authenticating users: a common use case; while it might be heavily dependent on back-end systems, many consumer-facing applications need to leverage open authentication platforms, like OAuth.

Validation: accepting user input and validating it before it is sent to a back-end service is a common feature of most consumer applications. Providing an easy mechanism to describe this validation can help simplify application creation.

Server interaction: less than a decade ago, libraries were focused on AJAX (Asynchronous JavaScript And XML), the concept of sending requests from the browser to a server to interact with a service or retrieve data. There are clearly many other technologies today, but this basic interaction is still fundamental to building most applications.

Accessibility: While accessibility (a11y) might not be a top concern of those making consumer applications, it should be. The web was founded on openness and allowing everyone to access your information. We should all strive to make our applications available to everyone.

Business applications

We consider business applications to be the heavy lifting of web applications, and the unsung heroes of the enterprise. Business applications typically present many varied business processes and allow the expert user to interact with them in an open-ended or unstructured way. These expert users have been trained on the business processes they perform or manage, and they often require training on the systems with which they interact.

These web applications often have complex requirements, are maintained over long periods of time, and tend to be very sensitive to changes. Something as simple as moving the location of a button can incense your users, because you have disrupted their accustomed journey. Changes have to be socialized extensively with the user community.

While exploring this type of application, we will look for any special sauce for dealing with these more advanced needs. We will revisit many of the features we covered in more detail back in the Foundations and Applications blog posts, but apply a lens of pragmatism. We will also look at two common functional features of business applications, charts and grids, which are often used to interact with and display complex data.

In the following sections, we summarize features using the following convention:

✓ – Feature is present and reasonably usable.

✗ – Feature is not provided by the framework, though there may be third-party solutions that solve the problem.

❓ – Feature is partially present, or the framework could be capable of providing the feature, but there are some questions in our minds about the viability of the feature or how straightforward it is to make it function as expected.

– Feature is on the framework’s roadmap (and we believe it will arrive and will not turn into Duke Nukem Forever).

Angular 2+

Mobile

Out of the box, Angular 2+ does not offer any particular focus on mobile, though Material 2 will perform optimized layout on smaller-screen devices, as well as providing components designed to handle mobile input, like swipe gestures.

As mentioned in the previous blog post, Angular 2+ has a dedicated mobile site, which gives inaccurate and outdated information on how to create PWA-compliant mobile applications. While it seems possible to create compliant PWA applications with Angular 2+, the tooling does not effectively support it at this point, and there are confusing and contradictory issues about how the team intends to resolve this.

Angular 2+ offers a solution for Server Side Rendering called Angular Universal, though this seems to suffer from the same challenges as PWAs, including the quick start guide on the website not working with current versions of Angular. Several open issues appear to indicate that even basic setup is currently problematic.

If your development is mobile-focused, and you like Angular 2+, then you would likely be better suited to utilize Ionic, which is built on top of Angular 2+, offers out of the box Cordova support, and is entirely focused on mobile applications.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | ❓ | The out of the box experience is currently broken. |
| Server Side Rendering | ❓ | Requires significant effort to get this working. |
| Mobile Components/UX | ✓ | Assuming you use Material 2. |
| Cordova Support | ✗ | Ionic 2, built on Angular 2+, is a possibility. |

Consumer applications

Angular 2+, much like Angular 1 that preceded it, is focused on providing single page applications. It allows you to define routes that trigger the application to display a particular view or activate a particular service. This allows linking to parts of an application, and the whole system is designed around presenting an application that does not require a page re-load during the user experience.

Angular 2+ does not come with any out of the box authentication integrations, though there are several third-party tools that leverage the Angular 2+ router to hook into various standard authentication platforms including OAuth.

Angular 2+ provides an HttpClient module that builds on top of XMLHttpRequest to make it easier to interact with servers and remote services.

As we discussed in the User Interface post, Angular 2+ does not inhibit accessibility, but the standard package does not really promote it either. Material 2 provides components that by themselves are accessible and does give some guidance on how to make whole applications that are accessible.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | The first router released with Angular 2 has been deprecated and replaced with a new router. |
| Authentication | ✗ | There are several proven third-party integrations. |
| Data Validation | ✓ | An explicit pattern for providing these features. |
| Server Interaction | ✓ | A decent abstraction to interact with back-end services. |
| a11y | ✓ | If you use Material 2. Other UI frameworks built on Angular 2 would need to be vetted. |

Business applications

Angular 2+ provides internationalization (i18n) and a level of localization (l10n) as part of the base framework and provides reasonable guidance on how to leverage the feature.

As far as data handling, there are no tools that are part of the official Angular project, though there are various third-party solutions to provide data abstraction and integration. The third-party ng-charts package wraps chart.js and makes it easy to integrate it into an Angular 2+ project. There are also various third-party grids available such as ag-grid-angular and ng2-grid, though we have concerns about how effectively they handle large datasets.

| Feature | Presence | Notes |
| --- | --- | --- |
| i18n/l10n | ✓ | Most general use cases can be met with the provided functionality. |
| Data Handling | ✗ | There are various third-party solutions of varying quality. |
| Charting | ✗ | There are decent third-party solutions though. |
| Grid | ✗ | There are third-party solutions, but their suitability for large data sets is questionable. |

React + Redux

Mobile

As you have likely noticed if you have read this series to this point, React + Redux are minimalistic libraries focused on solving the problems they were designed to solve rather than providing a holistic solution. Specifically, when it comes to PWAs, there is no direct support, though you would not expect such support from React or Redux.

When it comes to Server Side Rendering, Redux specifically addresses the considerations that are required for being able to set up an application instance on the server and extract the sort of information needed to bootstrap an application.

There is no official support for Cordova from React and Redux, though there are many tutorials and boilerplate applications available from third parties, and there are no barriers to building applications in Cordova with React + Redux.

It is worth mentioning React Native, which allows you to build native applications using JavaScript. This does not create a hybrid or mobile web application; instead, it creates a native iOS or Android application that is indistinguishable from one built with Objective-C, Swift, or Java.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | ✗ | There are many guides available on how to achieve this. |
| Server Side Rendering | ✓ | There is official information on how to accomplish this. |
| Mobile Components/UX | ✓ | While you can build any type of component with React, we consider React Native, using the native controls of mobile devices, to be one type of full solution for a mobile UX. |
| Cordova Support | ✗ | Though there are no barriers to supporting Cordova. |

Consumer applications

When it comes to single page applications and routing with React, things get a little complicated. React + Redux are fairly simple and straightforward libraries with a community that builds many interesting things on top of them, so there is significant innovation. One of those innovations is the third-party React Router, which has become the de facto router for React applications. However, the router’s state needed to be bound to the Redux state store, which led to react-router-redux being managed by the React team and therefore being official. This was then moved into a package that is part of React Router, making it not quite official? Of course, as is common in the React ecosystem, if you are not happy with these solutions, there are plenty of other variations available.

This is the biggest challenge when it comes to React + Redux: there are so many patterns, tools, conventions, and approaches that unless you lay down your standards early in the development process, your team will develop something that only a few people will understand and be able to maintain. Even projects developed with the same technologies might struggle to be compatible and consistent. You also need to be confident that the engineers setting the conventions and standards know what they are doing.

As you might expect, when it comes to authentication, there is no official (or pseudo-official) solution to this use case, though there are many third-party examples and solutions for various authentication platforms.

Redux’s simple approach manages state through actions, which are processed by reducers that produce new state, which then provides data to components. This one-way flow makes the task of validating user input easy, even though there are no out of the box higher order validation schemes, leaving the decision about how validation works up to each developer. There are third-party higher order frameworks for providing form or input validation, or for keeping back-end data consistent with application state.
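As a minimal, hypothetical sketch of validation living inside a reducer (this is not an official Redux validation scheme; the action type, shape, and email check are all illustrative):

```javascript
// A reducer returns new state rather than mutating it, so validation
// can simply be part of computing the next state.
function emailForm(state = { value: '', valid: false }, action) {
  switch (action.type) {
    case 'EMAIL_CHANGED':
      return {
        value: action.value,
        valid: /^[^@\s]+@[^@\s]+$/.test(action.value)  // naive check, for illustration
      };
    default:
      return state;
  }
}

const next = emailForm(undefined, { type: 'EMAIL_CHANGED', value: 'ada@example.com' });
// next.valid is true; components reading this slice of state can react accordingly
```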

React + Redux by themselves do not offer server interaction APIs. They expect that developers will either leverage an abstraction and wire it into the application state, or just leverage the native browser APIs. The same goes for accessibility. The expectation is that a developer would leverage knowledge of a11y to ensure that components and applications are accessible, or that teams would leverage libraries of components that are already created to be accessible.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | There are many options and choices. |
| Authentication | ✗ | Nothing extra out of the box, but many references and examples. |
| Data Validation | ✓ | With a one-way flow of data, validating data becomes a straightforward pattern. |
| Server Interaction | ✗ | Expected to bring your own abstraction or native APIs. |
| a11y | ✗ | Expected to build your own components or leverage a component library. |

Business applications

As mentioned in the foundations post, React + Redux don’t provide out of the box support for i18n/l10n, though there are some popular third-party solutions.

For charting, again, there is nothing out of the box, though there are dedicated third-party projects that wrap other charting libraries as React components. Notable are Uber’s react-vis, Victory, and the pseudo-official react-chartjs, which provides chart.js charts as React components.

For data grids, there is nothing provided out of the box. At the risk of sounding like a broken record, there are various third-party solutions, of which it is difficult to assess how effective they would be in dealing with large datasets.

When it comes to data handling, there is the Facebook-related project called GraphQL. While it is not specific to React + Redux and can be integrated with many other projects, it is a solution to data handling that fits very well with a React + Redux solution. In addition, it solves one of the biggest challenges of client-side development, especially in a micro-services architecture: back-end services are seldom written with the needs of a front-end in mind. GraphQL allows a client application to easily describe the shape and structure of the data it needs and where to get it from, and a GraphQL server will orchestrate retrieving that data and shaping it so it can be efficiently consumed by the front-end application. For those willing to introduce this middle tier, GraphQL can be a hugely powerful tool to accelerate front-end development.
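As a sketch of what describing that shape looks like, a hypothetical query (the field names are illustrative, not from any real schema):

```graphql
# The client asks for exactly the fields it needs, in the shape the UI wants
query {
  user(id: 1) {
    name
    posts {
      title
    }
  }
}
```

The GraphQL server resolves user and posts from whatever back-end services hold them and returns a JSON object in exactly this shape.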

| Feature | Presence | Notes |
| --- | --- | --- |
| i18n/l10n | ✗ | Expected to bring your own solution. |
| Data Handling | ✓ | Giving credit to React + Redux for GraphQL, but it can be used with many other frameworks. |
| Charting | ✗ | Again, bring your own, but there appear to be solid third-party solutions. |
| Grid | ✗ | Bring your own, but not as robust of a community of solutions as charts. |

Vue.js

Mobile

Vue.js has an official PWA template that is actively maintained, along with additional information on how to leverage the template.

As we discussed in the UI blog post in this series, Vue.js does not have an opinionated library of components, which also means the project does not offer a library of mobile components or anything specific for dealing with the challenges of a mobile user experience. There are several third-party component libraries and frameworks that are built specifically for mobile or are designed to be responsive across mobile and desktop platforms.

While Vue.js doesn’t offer anything extra to directly support Cordova, there is significant community support. Furthermore, some of the UI frameworks tailored to mobile make integration with Cordova straightforward.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | ✓ | Everything is available in the template, ready to go. |
| Server Side Rendering | ✓ | Extensive official information available, plus official helper packages to make it easy to manage an SSR application. |
| Mobile Components/UX | ✗ | Bring your own, but several mature third-party solutions based on Vue.js. |
| Cordova Support | ✗ | Bring your own, but many examples and third-party solutions. |

Consumer applications

Vue.js offers an official router which integrates well into Vue.js applications. It provides the tools and mechanisms necessary to build single page applications with Vue.js.

Vue.js does not offer anything official for authentication, but there are several complete third-party packages as well as other examples of integrating to open authentication platforms available.

Vue.js does not have any strict concepts around data validation. Traditionally, the framework follows the MVVM pattern, where the ViewModel mediates between the View and the Model, validating information. If using Vuex as an application state container, there are some suggested patterns for managing validation, though they feel like loose concepts. There are a couple of third-party solutions that provide a more rigorous system for validating data within models or a state container.

Prior to Vue.js 2, vue-resource provided an abstraction to XHR to manage requests to the back-end. With Vue.js 2, the team felt that this was not core to the Vue.js project and another organization now maintains it. Many Vue.js projects have started using the axios HTTP request abstraction library and there are many examples of how to utilize it within a Vue.js application.

Vue.js is unopinionated when it comes to the user interface, which means it is unopinionated about accessibility concerns.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | Has a well-developed concept of a router and how to create and manage single page applications. |
| Authentication | ✗ | A bring your own situation. |
| Data Validation | ❓ | Some patterns are suggested, but it feels a bit confusing and left to the developer to solve. |
| Server Interaction | ✗ | Previously included, now bring your own. |
| a11y | ✗ | Something to evaluate when using a third-party component library. |

Business applications

As we mentioned in the Foundations article, Vue.js does not provide any out of the box i18n or l10n features. The same applies to data handling and abstraction, charting and grids. For all of these, there are third-party solutions, with most of them curated at Awesome Vue.

Dojo 2

Mobile

Dojo 2 provides initial support for the PWA standards. Server Side Rendering support is also partially in place: early changes to support server-side rendering of widgets have landed, while the missing functionality is the ability to create application state server-side, extract it, and then hydrate it on the client side.

The existing widgets in @dojo/widgets are mobile ready and responsive in a mobile context. There are further plans to abstract mobile input, like touch, to make it easier to author widgets that are mobile ready.

There are currently no plans for Dojo 2 to directly support Cordova, though there are no known limitations to utilizing Dojo 2 in a Cordova environment. It is likely that a community project will help integrate Dojo 2 tightly with Cordova.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | ✓ | Initial PWA support available. |
| Server Side Rendering | ✓ | From a rendering perspective, complete; from a state management perspective, forthcoming. |
| Mobile Components/UX | ✓ | Existing widgets are mobile ready; more APIs are coming to make it easier to author new mobile ready widgets. |
| Cordova Support | ✗ | No current or planned support. |

Consumer applications

There is no out of the box solution to authentication, nor are there currently guides or references to integrating to open authentication platforms. It is likely this will change as Dojo 2 gets closer to a final release.

Dojo 2 provides an efficient mechanism for managing application state, though the mechanism and patterns for data validation are still being refined.

Dojo 2 has an abstraction library for XHR, fetch, and Node.js HTTP requests, called request, available in the @dojo/core package. It is promise-based and extensible.

All of the Dojo 2 widgets are designed to be fully accessible. The out of the box Dojo 2 theme is designed to be accessible as well, adhering to AA standard contrast levels. There is also expected to be further guidance on how to author additional widgets that are accessible.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | Available. |
| Authentication | ✗ | No current plans to provide out of the box. |
| Data Validation | | Patterns still to be refined. |
| Server Interaction | ✓ | The provided abstraction library should fit most use cases. |
| a11y | ✓ | Will provide an out of the box solution to make applications accessible. |

Business applications

Dojo 2 provides a fairly robust solution for i18n and l10n in the @dojo/i18n package. Using the Dojo 2 build tooling, locales can be built into the bundle, and other locales can be dynamically loaded at run-time.

Dojo 2 provides a robust, reactive solution around data handling in @dojo/stores with a state container and a unidirectional flow of data.

Dojo 2 did some proofs of concept with data visualization but has not progressed them further at this point. The team is evaluating the priority of this feature and whether wrapping another charting library might be the most effective way of delivering the functionality. Some sort of solution is on the medium-term roadmap beyond Dojo 2.

Dojo 2 has plans for an updated dgrid, which takes many of the concepts from dgrid 1 and brings them into a Dojo 2 pattern. The plan is to provide a modern, TypeScript-based grid that is highly performant, feature rich, extensible, and, like all Dojo 2 widgets, easily exported as a web component, after the Dojo 2 release is complete.

| Feature | Presence | Notes |
| --- | --- | --- |
| i18n/l10n | ✓ | Dojo 2 provides a full solution for i18n and l10n. |
| Data Handling | ✓ | Dojo 2 provides a robust data handling solution. |
| Charting | ❓ | Exactly how data visualization will materialize in Dojo 2 is unclear at this point. |
| Grid | | Dojo 2 has an alpha of a datagrid and is working on getting the grid feature complete and released after Dojo 2.0. |

Ember

Mobile

Ember.js does not offer direct support for creating PWA compliant applications, though there are a few third-party tutorials and solutions explaining how to add PWA features to an Ember.js application.

Ember.js provides a server-side rendering solution as a separate project called Ember FastBoot. It provides a framework for running Ember.js applications on a server and transporting their render and state to a client application. According to the project’s readme, because of the complexity and variety of third-party plugins and patterns for Ember.js, the applications that work easily with FastBoot are limited, though the team is continuing to improve this situation.

Ember.js does not provide any out of the box UI elements, so there are no mobile-ready components. One recent development, though, is that Ember.js has broken out its rendering engine into a separate project called Glimmer. This essentially provides a framework on which to build lightweight, fast UI components that integrate well into Ember.js applications. Components built on top of Glimmer are far more likely to perform well on processor and bandwidth constrained platforms like mobile devices.

There is no direct support for Cordova with Ember.js. Like other frameworks though, there are extensive third-party solutions and examples of how to build Cordova applications with Ember.js.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | ✗ | No direct support, though examples abound. |
| Server Side Rendering | ✓ | There is a complete solution, though it may be challenging to adapt existing applications to SSR. |
| Mobile Components/UX | ✗ | There are some component libraries for Ember.js that are mobile-focused and ready; in addition, Glimmer is a library for building performant mobile-ready components. |
| Cordova Support | ✗ | No direct support, though community examples are available. |

Consumer applications

Part of the core of an Ember.js application is its router, and defining an application’s routes is one of the core activities when creating an application. The Ember.js router provides all the features expected with a full-featured routing solution.

Ember.js does not provide authentication, though there is a very rich set of third-party solutions for Ember applications, and authentication is a common use case that is covered.

Because Ember.js applications traditionally follow a Model View Controller architecture, the Controller is usually engaged in data validation from the View. In addition to this, Ember Data is a rich library that makes it easier to manage data in an application model. Because of this, most server interaction occurs via an adapter. Ember Data provides a default adapter of JSONAPIAdapter which adheres to the JSON-API specification for interacting with JSON services.

Because Ember.js doesn’t focus on providing components as a core part of the framework, it does not directly concern itself with accessibility. On the other hand, the wider Ember community, with the support of the core Ember team, has created the Ember A11y project. This aims to make Ember projects accessible by default, as well as providing additional tools to help developers build accessible Ember applications.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | A fundamental part of an Ember application. |
| Authentication | ✗ | Plenty of third-party implementations. |
| Data Validation | ✓ | A robust mechanism for handling application data. |
| Server Interaction | ✓ | Defaults provided, additional adapters available. |
| a11y | | Not a direct concern, though it remains a major focus of the project. |

Business applications

As mentioned in the Foundations article, internationalization and localization are not provided out of the box with Ember.js. There are, however, at least two significant projects that can provide them: ember-i18n, and ember-intl from Yahoo!, which is part of Format.js.

Ember Data is more than just data validation and server interaction. It provides a higher-order system for managing remote data resources and describing APIs with the JSON-API specification.

Because Ember does not provide visual components out of the box, it does not provide charts or a grid, though there are extensive third-party solutions available. For charting, there are Ember-specific solutions as well as wrappers that expose other charting libraries in an Ember way. When it comes to grids, there are many options of highly varying quality.

| Feature | Presence | Notes |
| --- | --- | --- |
| i18n/l10n | ✗ | Available through third-party libraries. |
| Data Handling | ✓ | Very well-rounded solution. |
| Charting | ✗ | A wide range of third-party options. |
| Grid | ✗ | Many third-party options of varying quality. |

Aurelia

Mobile

Aurelia does not currently offer out of the box PWA support. There is an outstanding feature request to add this to the Aurelia CLI, though it is unclear when this support will be completed. There are also a few third-party tutorials about how to ensure your Aurelia application is PWA enabled.

As of January 2017, Server-Side Rendering is on the Aurelia roadmap to be a focus in 2017. At the time of this writing, a comprehensive SSR solution does not appear to be finished.

While Aurelia UX offers host detection (Cordova versus the web) and platform detection (desktop, iOS, Android), none of the UX controls are designed specifically for mobile, though there is an intent to have the Aurelia UX components work effectively on mobile. This detection information can assist developers in creating mobile-ready Aurelia widgets.

| Feature | Presence | Notes |
| --- | --- | --- |
| PWA Support | | A feature request is open and under consideration. |
| Server Side Rendering | | Officially on the project roadmap for 2017. |
| Mobile Components/UX | | Aurelia UX does not have mobile-ready components, though the team states it is part of their roadmap. |
| Cordova Support | ✗ | The plumbing exists to build upon, but Cordova is not directly supported. |

Consumer applications

Aurelia was conceived as a Single Page Application framework and its router is designed to be a powerful client-side router.

Aurelia does not come with any out of the box authentication integrations. There are some third-party services available, as well as tutorials from core team members that demonstrate how to create an authentication service.

Aurelia has a specific part of its platform for data validation named aurelia-validation. It uses a fluent, chainable API to manage the validation of data and wire it into an Aurelia application.
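To illustrate the chaining style, a fluent rule builder might look like the following. The names here are illustrative only, not the actual aurelia-validation API.

```javascript
// A toy fluent rule builder demonstrating the chaining pattern: each
// method adds a check and returns the builder so calls can be chained.
function ensure(property) {
  const checks = [];
  const builder = {
    required() {
      checks.push(obj => obj[property] != null && obj[property] !== '');
      return builder;
    },
    minLength(n) {
      checks.push(obj => String(obj[property] || '').length >= n);
      return builder;
    },
    // Runs every accumulated check against the given object.
    validate(obj) {
      return checks.every(check => check(obj));
    }
  };
  return builder;
}

const nameRule = ensure('name').required().minLength(3);
```

The chainable shape keeps validation rules declarative and close to the property they describe, which is the appeal of this style in Aurelia.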

Aurelia has abstractions for requesting data from a server: aurelia-http-client for XHR, and aurelia-fetch-client as an abstraction over the Fetch API. The fetch client, with a polyfill, is currently recommended over the HTTP client.

Aurelia UX appears to have backlog items to ensure that its components are fully accessible, and accessibility is a stated part of the Aurelia roadmap.

| Feature | Presence | Notes |
| --- | --- | --- |
| Single Page Application/Routing | ✓ | A feature-rich router is part of the platform. |
| Authentication | ✗ | Third-party solutions and effective tutorials. |
| Data Validation | ✓ | A dedicated part of the platform to help ensure the integrity of data. |
| Server Interaction | ✓ | Both XHR and Fetch abstractions are available. |
| a11y | | Part of their roadmap, though current components are not fully accessible. |

Business applications

Aurelia provides an i18n library that leverages i18next under the hood. i18next handles translations as well as other l10n features, like pluralization, and offers a sufficiently robust API for building higher-order localizations such as number formatting.

Aurelia does not provide any abstractions for accessing data beyond the HTTP and Fetch client libraries. It would be logical, though, to create services that provide a higher-order abstraction over data sources.

Aurelia does not provide any out of the box charting or datagrid components. There are some third-party projects providing this functionality as well as wrappers around other libraries that make it easy to integrate.

| Feature | Presence | Notes |
| --- | --- | --- |
| i18n/l10n | ✓ | An official integration with i18next is available. |
| Data Handling | ✗ | Could be built with the validation library into reusable services. |
| Charting | ✗ | Third-party solutions available. |
| Grid | ✗ | Third-party solutions available. |

Summary

Mobile

| Feature | Angular 2+ | React + Redux | Vue.js | Dojo 2 | Ember.js | Aurelia |
| --- | --- | --- | --- | --- | --- | --- |
| PWA Support | ❓ | ✗ | ✓ | ✓ | ✗ | |
| Server Side Rendering | ❓ | ✓ | ✓ | ✓ | ✓ | |
| Mobile Components/UX | ✓ | ✓ | ✗ | ✓ | ✗ | |
| Cordova Support | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |

These frameworks are a bit of a mixed bag when it comes to mobile development. Many can be used, but if the predominant use case is a mobile web application, there are other frameworks and solutions that are likely more suitable. If your use case is to provide a more complex desktop application which also works on mobile, then most frameworks are likely suitable for that.

Consumer applications

| Feature | Angular 2+ | React + Redux | Vue.js | Dojo 2 | Ember.js | Aurelia |
| --- | --- | --- | --- | --- | --- | --- |
| Single Page Application/Routing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Authentication | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Data Validation | ✓ | ✓ | ❓ | | ✓ | ✓ |
| Server Interaction | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ |
| a11y | ✓ | ✗ | ✗ | ✓ | | |

It was surprising that these frameworks cater for many common use cases, with the notable exception of user authentication. Putting that aside, the frameworks were generally usable for building a consumer application, though almost all of the reviewed frameworks could improve the out of the box accessibility experience.

Business applications

| Feature | Angular 2+ | React + Redux | Vue.js | Dojo 2 | Ember.js | Aurelia |
| --- | --- | --- | --- | --- | --- | --- |
| i18n/l10n | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ |
| Data Handling | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ |
| Charting | ✗ | ✗ | ✗ | ❓ | ✗ | ✗ |
| Grid | ✗ | ✗ | ✗ | | ✗ | ✗ |

It appears only Dojo 2 targets the use cases that are of greater importance in building business applications; it is clearly focused on business application scenarios. Ember.js would likely be the next one up, as it focuses on managing complex data interactions. At the same time, GraphQL, which we attributed to React + Redux, does a lot of the heavy lifting needed for a client application and can be used with all of the reviewed frameworks.

Up next

So maybe we have helped you narrow down what album you want to pick, and you have selected an album that is going to help you set the right mood for your party. Actually getting the record to play though is our next step. In the next post, we will delve into how you would actually develop applications with these frameworks.

Applications built with web technologies, something that was a curiosity a few short years ago, have emerged onto the scene as a must have for most organizations. Transcending websites and providing users with a more open and unbounded experience, web applications are everywhere. Likely the main reason you are reading this series is to determine how modern frameworks enable you to build web applications.

In previous posts in this series, we have discovered how our frameworks interact with us and how to put the basics together, but now it is time to really take in the whole album. In this post, we are going to explore how frameworks conceive of an application. That said, applications are never standalone. They will almost invariably need to get and send information and we may need them to work offline.

Approach

The term application can mean different things to different people and frameworks, so we need to understand what a framework considers to be an application. We will also explain the major architectural components of an application in the framework and if the framework enforces any particular application model.

State management

Applications have state, which changes as users interact with an application. Frameworks manage state in different ways. Some frameworks have very strong opinions about how to manage application state, while others focus more on creating the application interface and leave the how up to the developer.

Data integration and persistence

Most web applications will need to retrieve data from another system, often from the web server that hosts the application code. Retrieving business data from a server is a very common task for a web application. In addition to being able to retrieve the data, there is the need to send the data back to the server or even persist the data locally. We will review how the frameworks provide help for these common use cases.

Service integration and orchestration

While a well-designed service architecture should make it easy to create a front-end application, the reality often falls far short of that expectation. Whether trying to integrate calls to several back-end services, or facing off against a microservices architecture which requires the client to orchestrate calls to the various services, persisting data from one call to the next, our web applications end up doing more than might be ideal. Because of the rapid rate of change that a web application may accommodate, enterprises often put the application in the driver’s seat of managing the business process. If that ends up being the case, it is good to know what, if any, help the web framework can provide for us.

Offline

While Progressive Web Applications (PWA) define more of a set of good practices coupled with a few standards, one of the key requirements is offline capability. While the industry tends to focus on this feature for mobile, there are many use cases where not having reliable connectivity to the internet is an important application consideration. We will look at this particular aspect to see what these frameworks offer.


Angular 2+

Approach

Angular 2+ approaches applications with the main focus on two-way data binding between the component and the template. In Angular 2+ terms, a component controls a patch of the screen called a view. Angular 2+ templates are dynamic, and attributes and directives in the template augment the run-time behavior.

Angular 2+ has the concept of services: broadly, any class that provides some functionality to other parts of the application. Angular 2+ encourages abstracting services out of components but does not enforce this separation.

Angular 2+ relies heavily upon dependency injection, a run-time mechanism whereby instances of services are injected into instances of components. Services are registered with the injector, which then supplies the services to the components.
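The core idea can be sketched in a few lines of plain JavaScript. This is a toy injector to show the mechanism, not Angular's actual implementation, and the service name is made up for illustration.

```javascript
// A toy injector: services are registered once as factories, then
// supplied on demand to whatever asks for them.
class Injector {
  constructor() {
    this.providers = new Map();
    this.instances = new Map();
  }
  register(token, factory) {
    this.providers.set(token, factory);
  }
  get(token) {
    // Lazily instantiate and cache, so consumers share one instance
    // (mirroring the singleton behavior of a typical DI container).
    if (!this.instances.has(token)) {
      this.instances.set(token, this.providers.get(token)());
    }
    return this.instances.get(token);
  }
}

const injector = new Injector();
injector.register('LoggerService', () => ({ log: msg => msg }));
```

A component never constructs its own services; it simply declares what it needs and the injector supplies it, which is what makes the pieces easy to swap and test.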

Angular 2+ uses its own overlay on the JavaScript module system, calling these modules NgModules. Angular 2+ also relies heavily upon metadata, attached to JavaScript classes via decorators, to augment the behavior of the Angular run-time libraries.

State management

Angular 2+ does not have a distinct concept of state management. It encourages a pattern whereby services are created and injected into components. Those services can be part of an application state or other data, like configuration information, data retrieval, etc. Angular 2+ focuses much more on authoring front-end components, described as templates, versus enforcing a higher-order concept of an application.

Data integration and persistence

Angular 2+ provides the ability to communicate with RESTful services via its @angular/common/http module. This module exports an HttpClientModule, which allows users to integrate HTTP access into components or to create services that can be injected into components.

Outside of this module, there are no official services that provide integration in other ways and no data abstraction API. There are several third-party components which provide services that interface with IndexedDB and LocalStorage as well as integrating with things like GraphQL.

Service integration and orchestration

There are no specific tools for service integration and orchestration within Angular 2+. These high-order concepts tend to be built as services, which, when offered up as modules, can be re-used in applications.

Offline

Angular 2+ considers offline to be a mobile feature. Angular 2+ previously provided out-of-the-box support for scaffolding a mobile application with offline capabilities via a service worker, but currently the Angular CLI does not support offline, and it is not clear when or if it will be supported again. The main landing page for the feature indicates that it is alpha, but the instructions recommend an outdated version of the Angular CLI and suggest options that are not supported in the current version.

There are several tutorials available which provide some instructions on how to incorporate offline functionality into an Angular 2+ application.

React + Redux

Approach

Redux provides the APIs for what is traditionally considered an application, while React provides the user interface. React and Redux are designed to work together or independently.

Redux adheres to a strict unidirectional flow of data. A Redux store contains the application state, and changes are applied to that state with reducers: pure functions that have no side effects and contain no hidden state. Given the same input, a pure function is guaranteed to produce the same output. The Redux store is called with an action that identifies which reducer should be applied and what arguments are supplied to that reducer.
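As a minimal sketch, a reducer for a list of todos might look like the following. No Redux import is required here, since a reducer is just a function; the action shapes are illustrative.

```javascript
// A reducer is a pure function: (state, action) -> new state.
// It never mutates its input; it returns new objects instead.
function todosReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      // Return a new array rather than pushing onto the old one.
      return state.concat({ text: action.text, completed: false });
    case 'TOGGLE_TODO':
      // Copy the touched todo; leave all the others untouched.
      return state.map((todo, index) =>
        index === action.index
          ? Object.assign({}, todo, { completed: !todo.completed })
          : todo
      );
    default:
      return state;
  }
}
```

With Redux itself, this function would be passed to createStore(todosReducer) and exercised via store.dispatch({ type: 'ADD_TODO', text: '...' }).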

The architecture is designed to reduce the amount of cross-dependency between different parts of the application and to make it possible to separate parts of the application and make it easier to test every possible permutation in isolation, reducing the need for knowledge of other sections of the application.

React is then designed to work with a state container like Redux. React splits components into two different types: presentational and container components. Presentational components are not aware of the Redux state container; they simply render the properties that are set on their instances and invoke callbacks set as part of their properties when appropriate. Container components are explicitly aware of the Redux state container, subscribe to changes to the Redux state, and dispatch Redux actions. These could be considered more controller-like in the lexicon of MVC-type application patterns.

State management

Redux focuses on state management of the application, using the dispatched actions to change the state of the application and container components to be connected to the application’s state store. This creates an entire reactive model that causes changes to the application state to be reflected in the presentation of the application.

Connecting presentational components to the application state is usually done through a container component which interacts with the application store. For example, a presentational todo list component would be wrapped by a container component that maps the store's state onto the list's properties.
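The split can be sketched without React or Redux at all; the functions below are simplified stand-ins (react-redux's connect() automates the container part), and the fake store is fabricated for the example.

```javascript
// Presentational: knows nothing about the store; it just renders the
// props it is given (here as a plain string instead of JSX).
function TodoList(props) {
  return props.todos.map(todo => '- ' + todo.text).join('\n');
}

// Container: aware of the store; maps state onto the presentational
// component's props each time it renders.
function createTodoListContainer(store) {
  return function render() {
    const state = store.getState();
    return TodoList({ todos: state.todos });
  };
}

// A minimal stand-in for a Redux store, for illustration only.
const fakeStore = {
  getState: () => ({ todos: [{ text: 'write docs' }, { text: 'ship it' }] })
};
const renderTodos = createTodoListContainer(fakeStore);
```

Because TodoList never touches the store, it can be tested and reused in isolation; only the thin container knows where the data comes from.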

Data integration and persistence

Redux’s minimal approach to state management is appealing but limited by design. For example, actions must be objects that contain a type property. This can complicate business logic that needs to perform asynchronous work, such as fetching data from a server. The pattern of simply dispatching actions can, however, be extended.

Middleware lets a user wrap the store’s dispatch method and add more functionality. This means that the value passed to dispatch can be a function instead of an action object. That function receives the dispatch method as an argument, which can be called from within the function after an asynchronous call, for example. The thunk middleware (redux-thunk) can be applied using Redux's applyMiddleware function.
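The core logic of a thunk middleware is strikingly small; it can be sketched as follows, with a stubbed store standing in for a real Redux store for illustration.

```javascript
// If the dispatched value is a function, invoke it with dispatch and
// getState; otherwise pass the action object along to the next handler.
const thunk = store => next => action =>
  typeof action === 'function'
    ? action(store.dispatch, store.getState)
    : next(action);

// A stubbed store to demonstrate the flow (not a real Redux store).
const log = [];
const fakeStore = {
  getState: () => ({}),
  dispatch: action => log.push(action)
};
const dispatch = thunk(fakeStore)(action => log.push(action));

dispatch({ type: 'PLAIN_ACTION' });       // passed straight through
dispatch(d => d({ type: 'FROM_THUNK' })); // the function is invoked instead
```

In a real application the function passed to dispatch would typically kick off a fetch and dispatch a plain action when the response arrives.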

There are several third-party libraries that abstract data integration further and provide abstractions that work well for interfacing with local persistent storage including IndexedDB.

Service integration and orchestration

Both React and Redux are quite focused on the specific problems they are trying to solve. They promote a more pure JavaScript approach to problems by using simple, assumed immutable data structures and rely on built-in language features and APIs, especially those found in ES6+. React and Redux might be used as a foundation for an application, but more complex business logic and interfacing with services would need to be provided, though this straightforward pattern generally makes it easy to integrate additional libraries and tools.

Offline

While neither Redux nor React provides an explicit offline capability, containing all the application state in a single store, with the ability to insert middleware, makes it easy to persist the state within the browser. Because all changes to the application state can be known, this enables features like time travel, where changes to the application state are stored to be replayed later, primarily for the purpose of debugging.

There are some third-party solutions that provide a more robust framework for creating and managing the Redux state store in an offline mode.

Because React + Redux lack a prescribed framework, the biggest challenge comes from creating the right patterns in any container components so that they modify state gracefully while offline. This requires adopting a third-party solution or having the appropriate knowledge of how to keep the state consistent and functional when disconnected.

Vue.js

Approach

Vue.js does not mandate a project or application structure. For applications, Vue.js provides a few key areas of functionality which are combined to form an application.

The main focus of Vue.js is to provide a JavaScript-based, flexible, MVVM library, where Vue.js provides the ViewModel with two-way data bindings between JavaScript objects and the DOM view.

The vue-cli package can scaffold projects for a variety of purposes.

State management

For simple applications, the state of a Vue.js application will just be the values of all the data properties in its components. Components that need to share state can use shared objects or data stores, and a non-rendered component can be used as a global event bus.

For more complex scenarios, Vue.js provides a reactive state library inspired by Flux called Vuex. This library provides a central store for the entire application state and the means to update the state in a consistent and predictable manner. For large applications, Vuex allows a store to be separated into modules, where each module is essentially a self-contained store with its own state and mutation logic.
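The commit-a-named-mutation pattern Vuex uses can be sketched in plain JavaScript. This is a toy store showing the shape of the idea, not the real Vuex API, which also adds getters, actions, modules, and reactivity.

```javascript
// A toy store in the Vuex style: state may only be changed by
// committing a named mutation, which keeps updates predictable.
function createStore({ state, mutations }) {
  return {
    get state() {
      return state;
    },
    commit(type, payload) {
      const mutation = mutations[type];
      if (!mutation) throw new Error('unknown mutation: ' + type);
      mutation(state, payload); // mutations are the only write path
    }
  };
}

const store = createStore({
  state: { count: 0 },
  mutations: {
    increment(state, amount) {
      state.count += amount || 1;
    }
  }
});
```

Because every change flows through a named mutation, it is straightforward to log, inspect, or replay state changes, which is the predictability the paragraph above describes.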

Data integration and persistence

Neither the core Vue.js library nor Vuex provides any direct support for data persistence. There is an official Vue.js library providing Firebase bindings, and a number of third-party libraries providing access to local storage, RESTful resources, and more.

Vue.js previously recommended vue-resource as the official HTTP client library for Vue.js, but the Vue.js team eventually concluded that this type of functionality was really outside the scope of Vue.js, and it no longer recommends any particular solution.

Service integration and orchestration

Vue.js focuses on providing libraries to create applications. It does not provide any official pattern for integrating with services or orchestration of services within applications. Vue.js assumes that developers may express whatever logic they prefer within the ViewModel.

Offline

Vue.js does not offer any specific offline support. However, individual Vue.js component states may be serialized, as can the entire Vuex application store. By using other libraries, it would be easy to store the application state and restore it later.

Dojo 2

Approach

Dojo 2 encourages the use of unidirectional data flow, with the application state being set as properties on a top-level widget which propagates those properties to child widgets. Widgets emit higher-order events for user input, in response to which a controller modifies the application properties/state, and the widget system reacts to those property changes. Dojo 2 discourages storing state within widgets/components and has introduced the concept of meta providers to manage transient application state on behalf of widgets, intelligently invalidating widgets to cause them to re-render.

The scaffolding of applications is accomplished via the CLI command dojo create app.

State management

The @dojo/stores package provides a state management approach for Dojo 2 applications, making it easy to use with large data sets, and leveraging the Container/Injector to bind that data to widgets. Existing state management packages such as Redux may easily be used directly with Dojo 2, by creating a binding between the Redux store and an application widget’s properties, leveraging the @dojo/interop package.

Data integration and persistence

The @dojo/core package provides the request API which is very similar to the Fetch API to be able to interface with RESTful services in an isomorphic manner.

Dojo 2 provides a fully-featured state management package which supports the concept of resources to enable developers to describe a complex application data model.

Service integration and orchestration

Dojo 2 does not currently provide any specific tools to streamline service integration or orchestration. Dojo 2 being a flexible and open framework though means that it should be straightforward to integrate other packages that might provide this higher order orchestration. Also, as the concept of a Dojo 2 application develops, there may be more opinionated ways of managing service integration and orchestration.

Offline

Full Progressive Web App support is being integrated into Dojo 2, including support for service workers, which make it easier to enable offline functionality. Dojo 2 also provides a more structured approach for persisting application state, which makes offline support easier to implement.

Ember

Approach

Ember.js has a very structured approach to building applications which follows an MVC architecture. Every Ember application is an instance of a class that extends Ember.Application, which manages the state of the application and coordinates its flow.

Logically, an Ember application defines routes, which are managed by the Router. When a route is navigated to, it loads the model and renders a template. A template contains components, which can act as a View or a Controller for a model.

Ember also relies heavily on the concept of dependency injection, where the application contains a registry of different classes that make up the application and can instantiate and inject instances into the application.

State management

An Ember application will have an application store which contains the single version of truth for the application. Controllers interact with the Models of the application to handle changes and respond to user input to affect the state of the application.

Data integration and persistence

Ember Data standardizes on a set of conventions called JSON-API to provide easy integration with back-end data and services. Effectively, JSON-API is a set of conventions for creating RESTful services without the need to debate implementation details, and Ember Data adheres to these conventions.

Service integration and orchestration

Ember.js applications have higher order concepts of Services and the Run Loop which provide a framework for creating, scheduling and managing queues of long-running processes. This provides a level of marshaling of work to provide efficient scheduling of business logic. Ember uses the Run Loop to provide some internal work management, but it is also designed for application developers to manage asynchronous workflow.

Offline

There is no offline solution provided directly by the Ember.js project. There are several third-party solutions which make it possible to persist the application store in an offline state, as well as other tools which make it easy to integrate offline functionality.

Aurelia

Approach

Aurelia focuses on providing a structured MVVM application framework to power single page applications, while remaining a modular system that does not strictly enforce the MVVM pattern.

Aurelia applications bootstrap themselves, with the convention of a main module which exports a function named configure() which operates on the application object to set up the application. Like some server-side frameworks, the concept of plugging in functionality to the application via middleware is present. Aurelia promotes a fairly decentralized approach from that perspective, where different Views, built of Templates/Components are bound to ViewModels, which in turn interact with the application’s Models, navigating through the single page application via Routes.

Just as Angular 2+ feels familiar to those who used Angular 1, so will Aurelia. The Aurelia team worked heavily with Angular 1 and joined the Angular 2 team, but decided to start Aurelia after views on the direction of the project diverged. Aurelia goes further in embracing modern JavaScript and TypeScript, striving to avoid patterns that diverge from the underlying language.

State management

Aurelia was originally designed to be an MVVM framework, meaning that two-way data binding between the View and the View Model allows the View Model to act on the application's Models, which contain state. As the pattern of container state has become popular, Aurelia appears to have pivoted to also support the use of Redux for application state. In the end, Aurelia tries not to be too dogmatic about its approach and focuses on providing tools which can create feature-rich web applications.

Data integration and persistence

Aurelia provides an abstraction to the Fetch API as well as the HTTP Client/XHR API. These appear to be the limit of Aurelia’s official support, depending upon third-party packages and solutions to provide further access to data.

Service integration and orchestration

As mentioned above, there are abstractions for Fetch and XHR, which would allow a developer to interact with services on a server, but there are no higher order concepts that are provided out-of-the-box.

Offline

There is a module for caching, but outside of this module, Aurelia does not offer an out of the box solution to supporting offline applications. There are a few open issues in the Aurelia repositories discussing support for Progressive Web Applications, though they seem to be focused on scaffolding a project, versus providing any higher order functionality to manage the application via service workers in an offline context.

Summary

Angular 2+

Angular 2+ does not provide a strict application model. It is possible to create a more formal application structure, like MVC, MVP, or MVVM on top of Angular 2+. It does rely on several very specific Angular ways of development that might make it awkward to integrate with other frameworks or libraries.

Most of the logic of the application is left to the end developer, with Angular 2+ focusing more on facilitating the tying of logic to components that are part of bigger views. This is very reminiscent of the form-and-control application model, where the user is provided forms with controls which interact with, and create, a directed application flow.

React + Redux

React + Redux are two tools that help build applications that rely heavily upon modern JavaScript constructs. Each tool is specifically designed to do its main purpose well and not much more. This is best suited to situations where the development team has deep knowledge of how to build web applications. When that knowledge is available, very effective applications can be built rapidly.

If there is not sufficient knowledge and technical leadership, React + Redux applications can quickly become unmaintainable and overly complex and fail to meet their business objectives.

Vue.js

Vue.js focuses on providing the ViewModel of an MVVM application. Adding Vuex provides an application state container similar to Redux. If you want to follow the pattern of MVVM, or you want to incrementally re-wire an existing web application to a more modern architecture, then Vue.js is likely to work for you.

While Vue.js focuses on providing a few key features instead of an entire framework, it does not feel as open-ended as React + Redux, meaning that it can be safer to use with teams that have less of the knowledge and technical leadership that React + Redux requires.

Dojo 2

Dojo 2 provides a front-end to an application, which is flexible enough to be integrated with other tools to create a whole application. If you are looking for a more structured and opinionated reactive style solution, then Dojo 2 is worth consideration.

Ember.js

Ember.js adheres to a structured MVC model. It is strongly opinionated and has a breadth of concepts beyond application state management, to deal with an application as a holistic concern. Ember provides extensive documentation and a right way to architect and build Ember applications. If you want a framework for building full web applications, where patterns and anti-patterns are clearly defined and the MVC application model is aligned with your architecture, then Ember could be a strong candidate for you.

The risk is that there are higher order APIs that are built on top of the standards, meaning that Ember applications will likely stay as Ember applications and changing libraries or approaches is likely to mean wholesale re-writes down the road. This is, of course, a risk with any library chosen, but the breadth of functionality of Ember is more likely to lead to framework lock-in.

Aurelia

Aurelia was designed to support an MVVM application model, though it does not strictly enforce it. Aurelia is a flexible framework that will feel very familiar to users of Angular 1, while attempting to fully leverage modern JavaScript and TypeScript instead of introducing patterns that are specific to the framework. In theory, this makes it easier to integrate third-party solutions with Aurelia and to migrate to other patterns in the future.

Because Aurelia applications are easy to extend, it can be quite easy to end up with a sprawling application that has grown organically over time into something vast, complex, and inconsistent in the way it operates. Teams need to establish conventions and patterns up front to help ensure that the application remains maintainable over the long term.

Up next

Now that we have explored the core of what we are likely looking for in a framework, in the next post we will be exploring the situations for which each framework is best suited, and see if they have any magic up their sleeves for those use cases. We will find that while some bands are multi-genre and hard to pin down, some might only be good at playing one tune.

A significant amount of work on JavaScript toolkits and frameworks has centered around trying to fix, normalize, and optimize browser implementations. Doing so requires making many assumptions about what the problems are, how our tools will be used by developers, and what we expect of the future.

The assumptions made often turn out to be wrong. What’s worse is that these choices may prove to be correct for a very long time before coming back to bite us. During this period of blissful ignorance, toolkits can become tremendously popular and become a vital part of large, complex codebases.

Event Bubbling and Event Delegation

Event bubbling allows events originating from a child node to “bubble up” to its parents. This behavior led JavaScript developers to a loose design pattern: identify the node we really care about receiving events from – typically expressed using CSS selector syntax – while adding the listener to a parent of that node.
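
The pattern can be sketched in a few lines. This is a minimal, illustrative helper (the `delegate` name and its refinements are assumptions for the example, not any particular toolkit’s API):

```javascript
// A minimal sketch of the event delegation pattern: one listener on a
// parent decides, per event, whether the node that originated the event
// matches a selector, by walking up from event.target.
function delegate(parent, selector, type, handler) {
  parent.addEventListener(type, function (event) {
    let node = event.target;
    // Walk up from the originating node toward the delegating parent.
    while (node && node !== parent) {
      if (node.matches && node.matches(selector)) {
        handler.call(node, event); // `this` is the matched child
        return;
      }
      node = node.parentNode;
    }
  });
}

// Usage: one listener on the list instead of one per <li>.
// delegate(document.querySelector('ul'), 'li.item', 'click', onItemClick);
```

Real toolkits layer selector caching, event normalization, and unsubscription on top of this core idea, but the walk-up-and-match loop is the heart of it.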

Once this pattern made its way into toolkits, a number of assumptions had to be made when designing the APIs we have today, originally revolving around both performance and efficiency.

Event delegation is one of the de facto ways of doing event handling. But is it the right methodology for all projects? In fact, the better question might be whether the assumptions each toolkit has made are right for your project. Knowing whether an API is right for your project depends on knowing what assumptions these tools are built on and understanding how each toolkit has interpreted them.

Assumptions

Let’s look at some assumptions that might be made in thinking through how to efficiently manage DOM events.

The native event registration mechanism is too slow

Unless you can come up with a secondary reason for an API to exist, do not create a new API. With the effort browser vendors are putting into their run-times, it is all but guaranteed your implementation will one day be slower than the native implementation. At SitePen, we had a project that relied on the speed of an array splice. We discovered that, in some cases, manually downshifting indexes and the array length could yield a significant speed improvement, but we had no way of targeting a specific browser, browser version, or platform, as there was no run-time feature test to determine whether our implementation was faster than the native API.

New native APIs will not emerge

Work very carefully to guarantee you’ve gathered enough information to fall back to a native implementation either as it exists now or as it could conceivably exist in a perfect world. Another term for this is “future proofing”. In some cases, you may end up with an API that has more required parameters than absolutely necessary, but if it guarantees an easy transition to a significantly better native API, do it. A good example is the eventual native support for querySelectorAll: browsers natively implemented an API that many developers assumed would never arrive.
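
As a sketch of this kind of future proofing, consider a selector helper that defers to the native API whenever the environment provides it. The helper name and the deliberately naive fallback (class selectors only) are assumptions for illustration, not a production selector engine:

```javascript
// Prefer the native implementation when it exists; fall back otherwise.
// The fallback only understands simple ".className" selectors.
function queryAll(root, selector) {
  if (root.querySelectorAll) {
    // Native path: on modern engines this is the only branch that runs.
    return Array.prototype.slice.call(root.querySelectorAll(selector));
  }
  // Fallback path for engines predating querySelectorAll.
  const className = selector.slice(1); // assumes selector like ".item"
  const results = [];
  (function walk(node) {
    Array.prototype.forEach.call(node.children || [], function (child) {
      if ((child.className || '').split(/\s+/).indexOf(className) !== -1) {
        results.push(child);
      }
      walk(child);
    });
  })(root);
  return results;
}
```

Because the API shape matches the native one, code written against the wrapper transitions cheaply once the native path is universal.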

There is no performance penalty for uncommon use cases

Event delegation may manifest itself in several ways; the two outlying situations are a small number of events on a large number of nodes versus a large number of events on a small number of nodes. If you optimize the API for one of these two outliers, you may create significant bottlenecks for the other. With event delegation, while we may only ever have to add an event listener to a single node, a complicated method of identifying the nodes where callbacks should fire may incur a disproportionate performance hit. This can be the case when a large number of events are fired very quickly, such as mouse movement or scroll events.

Conditions and context

When considering event delegation, it is easy to think that we only need to concern ourselves with user interaction. This could lead us to assume that nodes are always part of a document and then ask, why wouldn’t we just add a single event handler to the document object? But DOM events aren’t always the result of user interaction – we also have synthetic, custom, and loading events. If the nodes we want to listen to are not in the document yet, but the main listener is on the document object, we will never be notified. And if it is unclear from the API that the listener has been added to the document and not one of the passed parameters, it can be baffling to understand why this is happening.

Abstraction is required

If a toolkit were to offer an API for event handling that only supported delegation – requiring both a parent node and a selector to identify child nodes – there would be no way to add an event listener directly to a node. Even requiring CSS selector syntax introduces higher-order functionality that could easily use another selector syntax or a simple function.

Side effects will not happen

As we saw above, DOM event bubbling allows the event delegation pattern to exist in the first place. But when you learn about what the full specification entails, you will see that event bubbling can be canceled. Your implementation may involve passing a custom event to the callback with a no-op stopPropagation method or you may just document that this can be a problem and limit the utility of your event delegation API. Both of these approaches have problems, but if you decide to do something like attach the event handler to the document object, it can amplify the side effects by adding a significant number of layers where the event can be canceled.
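
To make the side effect concrete, here is a plain-object simulation of bubbling (not the real DOM, and the names are illustrative): a handler on a child cancels propagation, and the delegated listener attached higher up never fires.

```javascript
// Minimal bubbling simulation: events walk up the parent chain
// unless a handler has cancelled propagation.
function dispatch(node, event) {
  let current = node;
  while (current && !event.cancelled) {
    (current.listeners || []).forEach((fn) => fn.call(current, event));
    current = current.parent;
  }
}

const doc = { listeners: [], parent: null };
const child = { listeners: [], parent: doc };

// A delegated listener attached high up, as delegation APIs often do.
let delegatedRan = false;
doc.listeners.push(() => { delegatedRan = true; });

// A third-party handler on the child cancels bubbling (stopPropagation).
child.listeners.push((event) => { event.cancelled = true; });

dispatch(child, { cancelled: false });
console.log(delegatedRan); // false: the delegated listener was starved
```

The further from the originating node the delegated listener sits, the more intermediate handlers get a chance to cancel the event before it arrives.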

Timelessness

Once code has been written, it is tempting to “set it and forget it”. But each year, browsers improve in ways we cannot imagine or predict, and the assumptions we held when the code was written may prove to have been wrong despite our best efforts.

Summary

Why are you choosing event delegation for your project?

Is the native implementation too slow? That’s unlikely in modern browsers.

Are there better APIs to perform event delegation? Not yet – if you need event delegation, this is a good pattern.

Does the toolkit’s performance optimization match what your project needs? If it’s focused on an outlier, it may not.

Is there something about the toolkit’s implementation that won’t work for your project? Read the documentation; it will usually be noted.

Are there side effects? You may not find this out until you run into a bug, so have it in the back of your mind.

Because all design patterns risk becoming anti-patterns as people learn them without learning the assumptions made during their creation, you should ask these same questions for any new tool you employ in your project. Be especially careful if what you are doing seems like it is cutting a corner. With care and thoughtfulness, your projects will be the shining monuments you know they can be.

We have previously discussed the look and feel of web frameworks. While we often become interested in a framework based on the stylishness of the widgets and applications it can create, this may lead to a similar approach to how we have historically selected music. Traditionally, you would go out, buy an album, maybe from a band you knew, with a great album cover and a list of interesting tracks.

Perhaps the album was #1 on the Billboard charts? Maybe you even sampled a few tracks while in the music shop. However, once you got home with your CD and played it over your kick-butt, valve amplified, highly optimized sound system, you found out that it was mixed by someone who thought that no one listening on an MP3 player through cheap headphones would ever notice the low sample rate and the removal of the bass! Instead of feeling like you were in the middle of a concert, you felt like you were listening to a band playing in a toilet over a phone. The album was optimized for its look and feel while ignoring the foundational architecture needed to create an album that scales under the demands of a highly optimized stereo system!

In this post, we’ll look at how the different web frameworks deal with the fundamentals. We’ll look at what environments are supported, how well aligned they are with the current standards, how they future-proof their code, and how they go above and beyond the current standards. This analysis will give us an indication of what sort of foundations we are building on when leveraging a particular framework. Especially in the enterprise, web frameworks are not just nice Christmas gifts. Using the first one we unwrap will affect our applications for a long time.

Supported environments

Depending on our needs, we will obviously have a minimum set of requirements. However, even if a framework supports those requirements, how and why it supports different environments today can tell us a lot about how it will support our future needs as well. If the framework only supports the “latest and greatest”, it is likely to continue to do so in the future. If the framework tries to support too far back, it raises questions about how efficient and performant it will be on newer browsers. Or does the framework try to find a sweet spot, focusing on a specific set of technologies and the browsers that support those features? Does the framework focus on mobile, or is it more geared towards the desktop? Does the framework effectively support both client-side and server-side? Is the framework focused on solving a small set of problems extremely well, or does it strive to solve all problems, but with perhaps a bit less rigor in some areas?

Alignment to modern standards

The web platform has advanced greatly over the past few years and is likely to continue changing at a rapid rate. Determining how aligned a web framework is to current standards can give us an indication of how it will track the platform in the future. This is not just about supporting recent ECMAScript standards and current CSS standards, but also about how the framework deals with the myriad of living standards that comprise the modern web platform.

There is also a double-edged sword here. Some frameworks adopt technology early, before it has progressed through the standards process. Be wary of a framework that depends on technology that may never officially land in the web platform. This can cause issues for your application in the future if the framework needs to break compatibility to re-align, and it could create performance concerns if more performant alternatives are implemented later.

Functional enhancements

Conversely, if a framework waits to only support fully standardized features, it may not provide the full set of features needed to build modern applications. Even with quickly evolving standards, it is likely that the web platform does not provide the total foundation of functionality that is needed. While this was the main focus of frameworks of yore because the web platform didn’t offer enough high-order functionality for developers to be productive, that has largely changed over the past few years. This doesn’t mean that there aren’t still things that should (and will) be improved, so the approach to dealing with these types of enhancements can have an impact on us as users of the framework.

Forward compatibility

Does the framework give any consideration to maintaining forward compatibility, and does it provide us, as users of the framework, with anything that saves us from having to rewrite our code every six months when the latest and greatest feature becomes part of the web platform?

i18n and l10n

Internationalization (i18n) and localization (l10n) are important in many use cases for a web framework. Many organizations think that they do not need these features, but with the internet being global, they can find themselves having to rewrite significant amounts of their applications when expanding into a new territory.

I18n is the practice of designing an application so that a resource, usually text, provided in one language can be supplied in others. L10n builds upon that and deals with higher-order concerns beyond just translating words. Some cultures have different ways of expressing plurals or grouping numbers: for example, 10 million (10,000,000) in US English is 1 crore (1,00,00,000) in India. Having tools that support this sort of localization can be critical to building an application that meets the needs of a particular market.
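
The standard Intl APIs, which many framework i18n layers build upon, illustrate both points directly:

```javascript
// Number grouping differs by locale: the same value is "10 million"
// in the US and "1 crore" in India, with different digit grouping.
const n = 10000000;
console.log(new Intl.NumberFormat('en-US').format(n)); // "10,000,000"
console.log(new Intl.NumberFormat('en-IN').format(n)); // "1,00,00,000"

// Plural categories also vary by locale; English has two ("one"/"other"),
// while some languages distinguish several more.
const plurals = new Intl.PluralRules('en-US');
console.log(plurals.select(1)); // "one"
console.log(plurals.select(5)); // "other"
```

A framework’s l10n story is largely about how conveniently it surfaces this kind of locale data to templates and widgets.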

Jump to:

Angular 2+

Supported environments

Angular 2+ targets support for Internet Explorer 9 – 11, current versions of Firefox, Chrome, Edge, Safari, iOS, and Android. It also supports IE Mobile 11. It achieves this support by expecting the implementer to ensure that polyfills are available in the environment; these can be supplied via a bespoke build pipeline. Historically, Angular has been quick to drop support for older browsers.

Animations are not supported in Internet Explorer 9.

All supported platforms are included in the Angular 2+ self-test suites.

Alignment to modern standards

Angular 2+’s internals are written in TypeScript and tend to embrace ES6 syntax where appropriate. Angular 2+ also relies heavily on some future standards, like decorators and ES observables (via RxJS). Angular 2+ embraces the ES6 module syntax but relies upon significant additional modifications and metadata to make the modules and classes work as part of the ecosystem. This additional metadata, while added in a manner that is likely forwards-compatible, means that there is a bit of lock-in to the Angular 2+ ecosystem. These modules are referred to as NgModules. While Angular 2+ adopts decorators and ES observables, it has not yet internally adopted the async/await pattern for asynchronous code.

As far as taking advantage of TypeScript features goes, most of the effort with Angular 2+ feels focused on authoring the framework itself; TypeScript types and other benefits do not fully flow through to the developer experience. The use of TypeScript does allow developers to adopt ES6+ syntax and other functionality with polyfills.

Functional enhancements

Most of the functional enhancements lie in the API surrounding the templating functionality of Angular 2+. While this is potentially a strength, it does not necessarily flow through to the development experience. One of the advantages of using something like TypeScript is to provide IntelliSense/auto-complete and design-time validation. Angular 2+ achieves this for templates via a language services extension that is available at least in Visual Studio Code.

Angular 2+ uses core-js to provide ES6 functional polyfills. It uses RxJS to provide a syntax that is very similar to the proposed ES Observables and higher order functionality on top of observables. Angular 2+ also uses zones to provide an execution context across asynchronous code.

Forward compatibility

Because Angular 2+ is written in TypeScript, it provides a level of syntactical future proofing. It also relies upon a set of standards-aligned polyfills. Both of these will likely help code you write today be forward compatible. Angular 2+ does rely heavily on its own framework APIs, and the long-term compatibility of those APIs is tied to the Angular 2+ roadmap. The Angular 2+ team does state, though, that they do not plan further compatibility breaks like the one from Angular 1 to Angular 2.

Angular 2+ embraces ES Modules and a component architecture. This helps by separating code so it can be refactored and tested without impacting the entire code base. However, because Angular 2+ relies on dependency injection, which does not depend upon concrete interfaces or types at design time, upgrading an application could cause regressions that are not identifiable at design time.

i18n and l10n

Angular 2+ provides an i18n API as part of its overall framework. This is accomplished by adding an i18n attribute to a template, along with an optional description of the intent. The Angular 2+ CLI tool will extract these translation bundles during the build process and stub them out so various translations for different locales can be provided. There is no specific API to localize things like dates, currency, plurals, and other language constructs. Using i18n requires SystemJS (which is otherwise optional), whose support for loader plugins allows translations to be loaded dynamically. Translation files are heavily dependent on the build process, so extending the translations will often require rebuilding your application. Many Angular 2+ users have turned to third-party solutions such as Globalize or i18next for better i18n and l10n.

React + Redux

Supported environments

Definitive browser support for React + Redux is complicated, as they are both libraries that would likely form part of a larger application stack, which may or may not have the same support matrix. ReactDOM, the core of the DOM interactions with React, is designed to support Internet Explorer 9 and beyond, which would include current versions of Edge, Safari, Firefox, Chrome, iOS, and Android. Redux should work with any browsers that support ES5+, which is the same support matrix as ReactDOM.

React also has mature concepts around server-side rendering and appears to support Node.js 0.10.0+, though it is difficult to find an exact Node.js support matrix.

Alignment to modern standards

React + Redux both embrace modern standards. They both generally assume the end developer will author source code in ES6+ and use a transpilation tool to down emit if required. Traditionally Babel has been used as the transpiler, though TypeScript plus core-js provide a similar set of functionality. There are guides on how to use React without ES6 support, but you will quickly realize this becomes increasingly challenging.

React + Redux often promote future standards as well, with Facebook adopting these into their code and then championing them within the standards groups. For example, React adopted the spread/rest operators for objects, while that functionality is still working its way through the TC39 standards process.
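
Object spread makes the copy-and-override style of Redux-like reducers concise, which is one reason the community pushed for it. A small sketch (plain JavaScript, not Redux’s API):

```javascript
// Object spread: copy existing properties, then override some of them.
// This is the idiom Redux-style reducers use to return new state
// without mutating the old state.
const state = { count: 1, user: 'alice' };
const next = { ...state, count: state.count + 1 };
console.log(next);        // { count: 2, user: 'alice' }
console.log(state.count); // 1 – the original object is untouched

// Object rest: collect the remaining properties into a new object.
const { count, ...rest } = next;
console.log(count, rest); // 2 { user: 'alice' }
```

Until the proposal is finalized, projects typically rely on Babel or TypeScript to transpile this syntax for older environments.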

Functional enhancements

React relies heavily upon JSX. JSX is a preprocessor step that adds XML syntax to JavaScript. While React can be used without JSX, doing so is challenging in practice, as the larger community has embraced it and most examples use JSX.

A collection of other libraries adds what Facebook tends to consider important functionality or patterns. For example, Immutable.js (a library that provides immutable objects to promote uni-directional data flow patterns) and Flow (static typing for JavaScript to promote build-time type enforcement) both provide functionality beyond the standards. Also, React + Redux include dependencies that help provide a stable set of APIs they can build on.

React has run-time dependencies on some lower level libraries, mostly to ensure some standards compliance, but also some lower level environment detection. Redux uses Lodash to provide non-standard higher-order functionality.

Forward compatibility

Because React + Redux promote ES6+ syntax, they should be largely forward compatible. They also encourage the use of ES modules to promote maintainability of code, and React has a component architecture. However, because it is unlikely that React + Redux would provide the entire set of libraries required to create a whole application, the risk of forward-compatibility issues increases due to churn within any additional libraries used, likely leading to more code that requires refactoring to stay current with community-driven best practices.

i18n and l10n

There is no direct support for i18n or l10n within React or Redux.

The most common way of dealing with i18n and l10n, though, is via the Yahoo! project format.js (a derivative of messageformat.js), which provides integrations for React, Ember, and Handlebars (the templating engine used by Ember). format.js provides a robust set of APIs to deal not only with translations but also with the localization challenges faced by many applications.

Vue.js

Supported environments

Vue.js targets support of Internet Explorer 9 – 11, and the current version of Firefox, Chrome, Edge, Safari, iOS, and Android. Some of the functionality requires polyfills/shims to be loaded.

The published full build of Vue.js will not work in CSP environments due to the use of dynamic JavaScript functions in its template compiler, but runtime-only builds that pre-compile templates into JavaScript functions can be used in CSP environments.

Vue.js can also run in a pure Node.js environment to allow for server-side rendering (SSR). Vue.js SSR should work with Node.js 0.10.0+.

Alignment to modern standards

Vue.js 2 is written using standard ES Modules and is authored in ES6+. Vue.js achieves legacy browser compatibility via Babel and focuses on webpack to provide build-time bundling of the ES Modules.

Vue.js 2 is written using Flow to provide static type checking of code, though Vue.js makes it easy for downstream developers to leverage Flow, TypeScript or just plain old JavaScript/ES6.

Vue.js’s structure and syntax are inspired by Web Components, but it does not use Web Components directly or use Web Component technologies like the shadow DOM or CSS scoping. Vue.js does provide similar functionality through single file components, which are, as the name suggests, Vue.js components defined entirely in a single file. A component file will contain an HTML template, JavaScript, and CSS. Components must be built using webpack with a plugin that understands the .vue component syntax. The CSS can be scoped and extracted as part of the build process.

Functional enhancements

Vue.js’s API is fairly minimal and does not provide utility functions beyond those required to render and manage components.

Vue.js does not have any direct external run-time dependencies, though Weex, lodash, Babel polyfills, and various low-level APIs are incorporated during the build cycle.

Forward compatibility

Vue.js is built to run in any browser supporting ES5, which is likely to be most browsers for the foreseeable future. Components in scripts or HTML pages can be written using whatever version of JavaScript the base environment supports. Single file components are built by a properly configured webpack, and support ES6+ features by default by integrating Babel into the build toolchain.

Vue.js made significant efforts to align to the Web Components standards, though found them lacking and slow to be implemented in browsers. It is hard to predict if Web Components will eventually become a stable enough set of standards to build upon. If that is the case, then Vue.js is likely to adopt it. That may require refactoring components, but Vue.js has done a good job of giving users an upgrade path where possible from previous versions.

i18n and l10n

Vue.js has no built-in i18n or l10n support, but a number of third-party libraries supply this; the Awesome Vue list catalogs the most widely adopted options.

Dojo 2

Supported environments

Server-side rendering in Node.js 6+ is on the current road-map, with limited capabilities already present.

Alignment to modern standards

Dojo 2 is authored in TypeScript, using full ES6+ syntax throughout. Instead of using global polyfills for ES6+ functionality, Dojo 2 leverages shim modules that generally do not touch the global scope and offload to native capabilities if the environment supports it. All of the provided shims are parts of published standards, except for ES Observables, which are still being considered as an ECMAScript standard.

Dojo 2 embraces class and method decorators as supported by TypeScript, though non-decorator ways of achieving the functionality are also provided. CSS is authored as CSS Modules in CSS 3+ syntax, which is transpiled during the build process to namespaced, down-emitted CSS.

Dojo 2 tends to embrace standards early that align with its design goals. It has adopted PointerEvents, CustomElements, Intersection Observers, Web Animations, async/await, dynamic import(), and rest/spread operators on objects.

Dojo 2 widgets can be exported to Web Components that then can be used standalone in other frameworks, only exposing an external API which is aligned to the Web Components standard. Dojo 2 can also directly use Web Components.

Functional enhancements

Functional enhancements, above standards, are located in @dojo/core, which provides higher-order functionality that Dojo 2 considers generally useful. This includes abstractions for cancelable promises, requesting resources, etc.

Forward compatibility

Dojo 2 being authored in TypeScript provides a level of syntactical forward compatibility. Dojo 2 strongly encourages that downstream applications be authored in TypeScript as well, which provides this same forward compatibility. TypeScript, in addition to polyfills and shims, is how Dojo 2 achieves backwards compatibility.

While Dojo 2 can be used in an AMD environment, it is strongly encouraged to use Webpack 2 via the dojo build CLI command to build your applications, as this will help ensure any transition to future packaging and module loading does not impact your code and structure.

By embracing ES Modules as well as CSS Modules, Dojo 2 promotes patterns that help ensure that widgets express all their dependencies. By leveraging type enforcement across the code and CSS, Dojo 2 strives to provide a system which identifies issues at development/build time, before they are found at run-time. Changes upstream that are breaking downstream should be easily identifiable.

Dojo 2 fully breaks compatibility with Dojo 1, after more than 10 years of maintained compatibility. It is expected that future versions of Dojo will not be a complete break, instead iterating in smaller steps over time.

i18n and l10n

Support for i18n and l10n is built into the widget system. By leveraging Globalize.js and the official Unicode CLDR data, Dojo 2 applications can detect and change locales, and developers can express translations and other localization information to be used with widgets.

The Dojo 2 build CLI is able to build locale translations directly into bundles or leave them to be dynamically loaded at run-time.

Ember

Supported environments

Ember targets support of Internet Explorer 9 – 11, and the current version of Firefox, Chrome, Edge, Safari, iOS, and Android.

Ember also has a mode called FastBoot which provides a path to server-side rendering. It supports Node.js 4+. Caution needs to be taken when introducing Ember addons with downstream dependencies that may not conform to the FastBoot environment, potentially breaking SSR.

Alignment to modern standards

Ember is geared towards supporting ES5, although Glimmer is authored in TypeScript and embraces ES6+ syntax. Many users of Ember are starting to adopt modern syntax and use some sort of transpiler to provide backward compatibility.

Functional enhancements

With Ember 2, Ember stopped directly depending on jQuery internally for functional enhancements. It is still included by default and Ember provides a delegate to the jQuery APIs, but it can be wholly removed if desired. Ember has embraced Babel to provide standards-based polyfills as part of the build process.

Forward compatibility

Ember gives a lot of consideration to forward compatibility, even providing a very viable upgrade path from 1.13 to 2.0.

Ember has stated that they are on a path to separate out their platform and make it more modular, breaking down functionality into separate packages, as the Ember team feels they have been accused of being too monolithic.

i18n and l10n

Ember does not provide an out-of-the-box i18n solution. There are third-party solutions, though: ember-i18n, and Yahoo!’s ember-intl, which is part of Format.js.

Aurelia

Supported environments

Aurelia focuses on current versions of modern browsers: Microsoft Edge, Chrome, Firefox, and Safari. While Aurelia wants to support browsers like Internet Explorer 11, issues affecting these platforms do not seem to be a priority to fix.

Aurelia UX has the concept of supporting the web as well as Cordova and Electron, though both the Cordova and Electron support appear to be works in progress at this time. Aurelia also has server-side rendering as a goal, but it too is a work in progress.

Alignment to modern standards

Aurelia is authored in ES6+ and designed to use Babel to transpile source code as needed. Aurelia has its own package of polyfills to provide backward compatibility as well as some future standards functionality. Out of the box, Aurelia does not directly deal with CSS3+, but it allows preprocessors or postprocessors like postcss to be easily integrated.

Aurelia UX is authored in TypeScript and Aurelia provides a complete set of typings for their framework to support downstream development in TypeScript.

Forward compatibility

Aurelia’s aim is to be forward compatible with the web platform for two to three years in the future. Because it also supports downstream development in TypeScript, that is likely going to make it easier for downstream projects to achieve that goal. Aurelia uses Babel to transpile its ES6+ source code. Modern DOM APIs are used and polyfilled to work back to Internet Explorer 9, although IE9 is not officially supported. Aurelia leverages the Web Components standards internally and aims to support exporting UX components as standalone Web Components, though this is not yet implemented.

i18n and l10n

Aurelia provides an i18n library that leverages i18next under the hood. i18next handles translations as well as some other l10n features, like plurals, and provides a robust enough API to build higher-order localizations like number formatting.

Summary

Angular 2+

Angular 2+ builds on a fairly decent foundation, though it has dependencies on a few larger projects that it doesn’t directly control. Transpiling gives some forward-compatibility safety, but its extensive use of metadata will cause a level of lock-in and require alignment to its architecture.

React + Redux

React and Redux are flexible tools and libraries. On their own, React + Redux do not provide a total application development solution. It is quite easy to collect additional libraries and add functionality and then wonder why your application has become bloated and problematic. In the right hands, they can provide an efficient and effective foundation to build on, but this requires real experience and skill. Both React and Redux already depend on external libraries to get to their foundation and it is quite easy to collect more junk. If you are sure you want to roll your own, then React and Redux are both excellent libraries. They are not an out-of-the-box framework.

Vue.js

If you are sold on the MVVM application model, it would be hard to not consider Vue.js. It provides a solid foundation of modern APIs while incorporating upstream dependencies into a coherent set of end developer APIs.

Dojo 2

Dojo 2 strives to provide a decent foundation and tries to minimize the dependencies it includes. Being focused on embracing TypeScript and modern syntax and functionality should give it a decent amount of growth room without having to be rearchitected. Dojo 2 really is most effective if you are willing to develop further with TypeScript.

Ember.js

Ember has a robust ecosystem, and Ember.js really focuses on maintaining it, making it easy for people to build on top of the foundations. Ember does not provide as wide a foundation as some other frameworks, so there is some risk that the functionality you need or want is provided via an add-on, whose quality may vary, though the ecosystem seems large and fairly self-regulating.

Aurelia

If you like the concepts of dependency injection and a templating-led framework, but want something very much aligned with current and future standards, then Aurelia may be the right framework for you. While the foundations are mostly there, parts of Aurelia are still a work in progress, though the direction and design goals seem coherent and the overall architecture sustainable.

Up next

We have looked at the album covers, read the liner notes, understood how the music is produced, but we really have not yet done what you would typically do with an album: actually sit back and listen to it. That is what is up next! We use web frameworks to build web applications, and in the next post, we will explore how the different frameworks deal with the concept of an application.

While instruments such as guitar and drums are part of a band, how they are used by the musicians define the style of the band’s music. Similarly, the elements of an application user interface connected together define the user experience. In this post as part of our ongoing series about frameworks, we are going to explore in depth the ways in which frameworks enable an overall UX design.

Many of us in software engineering look at UX design as a bit of a mystical art, populated by overtly creative people, wearing checked shirts, who get upset when the button is one pixel off from their original design, is just the wrong shade of pink, or does not have the right snap within an animation. For those in UX design, the software engineering teams may be thought of as the modern day construction workers, who never estimate anything correctly, and fail to build things as designed.

At the end of the design and development partnerships, the users of our applications have their own expectations, voiced and unvoiced, which we are trying to satiate. Our choice of framework has a significant impact on our ability to meet these needs.

Design ethos

Some JavaScript frameworks offer an overall design ethos, of how not only the UI should look and feel, but offer strong options of how a transaction is completed. Depending on your development needs, having a well designed and coherent design ethos will make it easier to rapidly provide an application which has a look and feel that is familiar and intuitive. In some cases though, your unique selling point might be the overall design of the application, providing a user experience that differentiates you in the marketplace. The ease of expressing a UX design would be of greater import to you when providing a differentiated user experience.

Customizing look and feel

Even if you adopt the supplied design philosophy, the ability to tweak the look and feel to meet your needs is likely to be a requirement. While we touched on this subject in the previous post in this series, it is good to revisit some of these concepts in the context of the entire user experience.

Design workflow

Materializing a design vision can be difficult, often with designers and software engineers speaking two different languages. How this workflow is achieved in practice will impact your delivery timelines. Do frameworks offer anything to make this process easier?

Jump to:

Angular 2+

Design ethos

On its own, Angular 2+ does not express any opinions on theming an application. The framework focuses on defining components from a code and template perspective and leaves the styling options to the developer.

The Material 2 project does encapsulate Google’s Material Design. Material 2 includes four pre-built themes and offers a framework for customizing and/or creating new themes.

Customizing look and feel

Assuming Angular Material is used, the look and feel of an Angular 2+ application can be customized by defining a custom theme with the Sass pre-compiler. Gesture support is provided in Angular Material applications by including HammerJS in the application.
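As a sketch, a custom theme can be defined with the Sass mixins that Angular Material provides (palette choices and variable names here are illustrative, and the mixin names reflect the Material 2 beta, so they may differ in your version):

```scss
// Pull in Angular Material's theming API (Material 2 beta era).
@import '~@angular/material/theming';

// Emit the core styles shared by all components (include once per app).
@include mat-core();

// Define primary and accent palettes from the built-in palette maps.
$app-primary: mat-palette($mat-indigo);
$app-accent:  mat-palette($mat-pink, A200, A100, A400);

// Compose a light theme and emit themed styles for every component.
$app-theme: mat-light-theme($app-primary, $app-accent);
@include angular-material-theme($app-theme);
```

Swapping palettes or using mat-dark-theme is then enough to re-skin the entire component set.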

Design workflow

Angular does not provide much guidance for how to design a user experience from the top down; most of the documentation focuses on the implementation level. Angular components are HTML template based. HTML mockups with accompanying CSS could be a starting point for creating components, but many of the user interactions and relationships to the application likely would not be easy to express in an HTML + CSS mockup.

React + Redux

Design ethos

React focuses on the architecture of writing components and does not offer opinions on design aesthetics. There are no officially maintained libraries of React components. There are several third-party libraries that are built on top of React which do offer an opinionated design ethos.

Customizing look and feel

Styling React components is accomplished in a straightforward manner using either the className or style JSX attribute. Beyond providing these mechanisms to parse and apply styles and classes to rendered DOM respectively, React offers no mechanism to switch out component themes dynamically or to modify behavior based on device type.

Because React applications expect an ES6+ runtime environment and thus inherently rely on transpilation, it is very common to process a React application using a build tool like Webpack. By using Webpack, CSS class names can be imported into a React component and optionally localized using CSS modules; these imported class names can then be applied to specific nodes during the component’s .render() method:
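A sketch of that pattern follows; the file name, class names, and hashed output are illustrative, and it assumes a Webpack build with css-loader configured for CSS modules:

```javascript
// Button.css (processed as a CSS module by the build):
//   .primary { background: #0366d6; color: #fff; }

import React from 'react';
// The CSS modules loader turns this into a map of local class names,
// e.g. { primary: 'Button_primary__3f9x2' }.
import styles from './Button.css';

export default class Button extends React.Component {
  render() {
    // Apply the localized class name to the rendered DOM node.
    return (
      <button className={styles.primary}>
        {this.props.label}
      </button>
    );
  }
}
```

Because the generated class names are hashed per module, two components can both define a .primary class without colliding at run-time.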

Design workflow

A React + Redux application lends itself to a separation of concerns between design and development as well as any other JavaScript framework does. Design teams can articulate their intentions in the form of mock-ups or static examples that use HTML and CSS. These mock-ups can then be translated into rendered DOM within a component and styled as necessary. Because there is no opinionated abstraction for themes, the long-term maintainability of the look and feel of a library of React components requires up-front thought and planning.

Vue.js

Design ethos

Vue.js does not express any design ethos. Its primary concern is with the structure of an application, not the look and feel. There are available third-party component libraries for Vue.js which express a design ethos.

Customizing look and feel

Vue.js provides an HTML-based template syntax and some convenience methods for managing CSS classes and inline styles on elements. However, the look and feel decisions for a Vue.js app are left entirely to the developer.
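Those convenience methods are the class and style bindings in Vue's template syntax. A minimal sketch (Vue 2 syntax; the element id, data properties, and values are illustrative):

```javascript
// Assumes Vue 2 loaded globally, e.g. via a <script> tag.
new Vue({
  el: '#app',
  data: {
    isActive: true,
    accent: 'rebeccapurple'
  },
  // The object syntax toggles the "active" class based on isActive;
  // the style binding maps camelCased properties onto inline CSS.
  template: `
    <button
      v-bind:class="{ active: isActive }"
      v-bind:style="{ color: accent }">
      Toggle
    </button>`
});
```

Beyond bindings like these, what the classes and styles actually look like is left entirely to the developer.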

Design workflow

Vue.js itself is not concerned with the UX design process and provides no specific tools to help. Because its components are HTML template based, HTML mocks can be a starting point for the design process. Because there is no opinionated abstraction for themes, the long-term maintainability of the look and feel of a library of components requires up-front thought and planning.

Dojo 2

Design ethos

The Dojo 2 out-of-the-box widgets are being built to a consistent user interface design. Dojo 2 has a default theme and design guidelines, and post-Dojo 2.0 release there are plans for at least two additional themes that can be adapted and tailored as needed.

Customizing look and feel

Dojo 2 is designed to leverage CSS modules. It is also designed to leverage the PostCSS post-processor, which focuses on authoring modern CSS while down-emitting styles to ensure older browser support. This system allows the code to be tightly coupled with the styles it requires. The build tooling ensures that the required CSS is available at run-time and is namespaced to avoid class name collisions. The tooling also provides the necessary information to allow easy integration with the IDE. For example, you would author a CSS module and import it as you would any other JavaScript or TypeScript module, receiving code completion if using a TypeScript language services aware IDE:
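A sketch of that import (file paths and class names are illustrative, reflecting the Dojo 2 beta tooling):

```typescript
// button.m.css (a plain CSS module authored alongside the widget):
//   .root { border-radius: 2px; }
//   .label { font-weight: bold; }

// The build tooling generates a typings file for the CSS module, so
// its class names are known to the TypeScript language service.
import * as css from './styles/button.m.css';

// css.root and css.label are namespaced class name strings; an IDE
// with TypeScript language services offers completion on `css.`.
const rootClass: string = css.root;
```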

Dojo 2 also offers a theming system, which introduces the concept of themeable classes versus those that are structural and therefore fixed. When a theme is applied via the properties of a themeable widget, the classes from the theme are substituted. An example of creating a themeable widget:
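A sketch of such a widget follows; it uses the beta-era @dojo/widget-core module paths and mixin names, which may have shifted between releases, and the widget and class names are illustrative:

```typescript
import { WidgetBase } from '@dojo/widget-core/WidgetBase';
import { v } from '@dojo/widget-core/d';
import { theme, ThemeableMixin } from '@dojo/widget-core/mixins/Themeable';
import * as css from './styles/button.m.css';

// The @theme decorator registers the widget's base CSS module classes.
@theme(css)
export class ThemedButton extends ThemeableMixin(WidgetBase) {
  render() {
    // this.classes() resolves css.root against any theme supplied via
    // the widget's properties, substituting themed class names where
    // the theme provides them; structural classes stay fixed.
    return v('button', { classes: this.classes(css.root) }, [ 'Click me' ]);
  }
}
```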

Dojo 2 makes it easy to provide support for gestures and other user input events without the developer needing to know the details of the DOM event system.

Design workflow

Dojo 2 was specifically designed to make it easier to integrate different roles in the workflow. By providing a pattern for separating structural styles from thematic styles, it is possible for the look and feel to be created independently, but work in an integrated way at design time.

Dojo 2 does prefer a functional style for describing the DOM structure of a widget. Using this style means there is no direct path from an HTML + CSS mock-up to a widget. Dojo 2 does support TSX, the JSX extension for TypeScript, which allows an HTML-like template to be embedded in a widget.

Ember

Design ethos

Ember.js components are built with Handlebars templates, so theming and styling are left entirely up to the user. There are third-party component libraries which provide components that embody a particular design ethos.

Customizing look and feel

Class names can be set in the template when the component is invoked, or dynamically using a bound property. Ember.js focuses on a two-way data binding methodology, so attributes in the template are bound to the backing JavaScript object and therefore to values in the application.
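A sketch of class name bindings on an Ember 2.x component (the component and property names here are illustrative):

```javascript
import Ember from 'ember';

export default Ember.Component.extend({
  // Adds the class "active" when isActive is true, "inactive" otherwise.
  classNameBindings: ['isActive:active:inactive'],
  isActive: true
});
```

Invoked from a template as something like {{my-button isActive=model.enabled}}, the bound property flows into the element's class list and updates as the value changes.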

In 2017, the Ember project created Glimmer, which is separate from Ember.js. Written and available in TypeScript, Glimmer uses ES6 class syntax and eliminates the sometimes-confusing Ember configuration object.

Design workflow

Theming with Ember.js is left entirely to the developer. Classes can be added within templates just as they would be in static HTML files, and style sheets are not integrated into the framework. There are, though, several third-party libraries that provide a design framework as well as user input abstractions like gestures.

Aurelia

Design ethos

While not a requirement for using Aurelia, Aurelia UX expresses an opinionated design ethos. It encourages encapsulating styles within an element and allows for data binding within styles. Aurelia UX has the concept of hosts (web, Cordova, Electron, etc.), platforms (web, iOS, Android), and design languages (Material Design, iOS Design). The host and platform are detected by Aurelia UX, and this information is all accessible from the component’s styles and can be used via the in-style data binding to tailor the component’s styles.

The Aurelia UX source code repository provides some basic components as well as the tools for providing additional styling and theming. There is an additional Aurelia UX showcase which highlights some of the functionality and components.

Customizing look and feel

Aurelia UX provides a solution for the look of components and the project team has plans to address the feel in a similar way, though it is currently a work in progress. According to the interaction, movement, and flow section of the Aurelia wiki, Aurelia UX will build on top of its components and add these higher-level features. However, while the patterns may still be in development, there is already the animator-css library for performing animations.

Here’s an example of extending the snippet in the UI section to make a themed widget:
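Since the original UI snippet is not reproduced here, the sketch below is standalone. It uses plain Aurelia view/view-model binding rather than the Aurelia UX APIs, and all element, property, and class names are illustrative:

```typescript
// themed-button.ts — the view-model
import { bindable } from 'aurelia-framework';

export class ThemedButton {
  // The consumer supplies a theme name, which the view below
  // interpolates into the element's class attribute.
  @bindable theme = 'default';
  @bindable label = '';
}
```

```html
<!-- themed-button.html — the view -->
<template>
  <button class="btn btn-${theme}">${label}</button>
</template>
```

Binding a different theme value on the custom element is then enough to swap the styling, and Aurelia UX builds further on this idea with host/platform detection and in-style data binding.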

Design workflow

While it seems that Aurelia has not given specific considerations for a design workflow, it has a fairly robust system for supporting different contexts and designs within a single application. Aurelia also uses HTML templates and CSS, which can make it easy to adapt HTML + CSS mock-ups into components. The abstraction of the feel is still evolving and how that will work in practice is not completely clear at this time.

Summary

Angular 2+

If you like Google’s Material Design then Material 2 delivers on that with a fairly robust system for tailoring components and creating new ones. There is also an increasing number of third-party alternatives that can help you.

React + Redux

React is far more focused on being a toolkit and does not provide a higher order framework for UX design. There are several third-party libraries, but with varying degrees of maturity. If you are building your own UX design and have the engineering skills to build it properly, then React can be a tool to help render that on the screen.

Vue.js

Vue.js focuses on the application, often used in situations where there is an existing UI/UX that needs a modern application framework to power it. There are quite a few third-party component libraries that provide a fairly complete UX design and additional abstractions to make it easy to maintain. There are also no real limitations to building your own component library.

Dojo 2

Currently, Dojo 2 provides some strong abilities for creating and managing the look of components and the feel aspect is under development. There is an intent to make it easy to create and manage reusable libraries of components and provide the systems and patterns for managing the UX design, though that vision is yet to be fully delivered.

Ember.js

Ember.js is focused on the application. There are a significant number of third-party components, but without an opinionated way to manage the look, integrating these components into a coherent UX design can be challenging. There are some larger libraries of components as well as libraries that allow expression of themes and user input management. Like some of the other frameworks, if the Ember.js application framework is for you and you want to create your own UX, then you will find many options to accelerate your efforts.

Aurelia

Aurelia UX provides an existing UX design as well as an advanced set of tools that allow expressing the look of components. The Aurelia UX team have expressed their intent to mature the feel aspects and Aurelia is one of the more advanced frameworks that has identified the challenges of dealing with a UX design and how to manage it in practice.

Up next

Now that we have wandered down the aisles of the local web framework shop, and maybe narrowed down how we want things to look and feel, we need to go back to the basics. Especially with the rapidly changing web platform, we need to look at the frameworks in the context of how they support the standards, what sort of foundational APIs they supply, and how they help make our code future proof.

…we would all be using justin-bieber.js. We as an organization have been working with JavaScript since 2000. We have seen frameworks rise and fall, including being responsible for some of them. We have seen trends come and go. We have seen browser dominance ebb and flow. We have seen winners and losers. We have seen JavaScript go from an obscure simplistic scripting language to the language of the internet. Through all of this, we have experienced and learned a lot. We would like to share some of that with you.

Probably the most common question we get asked as we get to know an organization is “What framework should I use?” No matter what some people would have you believe, there is no straightforward answer. The answer, though, is founded in our typical response of “What are you trying to do?”

Through this series of blog posts, we are going to try our best to give you a framework for choosing a framework (how meta is that?) and along the way, we are going to try to provide some analysis of several of today’s leading frameworks. But first we need to establish some common language between us.

What is a web application?

This is one of the things that has changed dramatically over the last 20 years. In the nascent days of the web and web applications, this usually was a form which posted information back to the server. In many cases, the form was generated HTML from the server, with a little bit of JavaScript to make it interactive. The biggest problem was that it was extremely difficult to get a rich experience, like that of a thick client application. If you needed a rich experience, you would often have to incorporate things like Java applets, Flash, or Silverlight into your application. Many enterprise web frameworks were nothing more than a way to send thick clients over HTTP and display them in a browser.

Today though, there is virtually no limit to what can be done via JavaScript, a web browser, and related technologies. From virtual reality to real-time media, all tightly coupled to the hardware; all of this is available as part of the web platform. There has also been a significant shift on the desktop, where there’s a continued desire to leverage the same skills for development, enabling fully native desktop experiences. On mobile, we see an increasingly fuzzy line between native and web experiences, with a huge focus on making sure the web technologies deliver full interactive experiences that meet the performance expectations of the user, irrespective of the power of their device or the speed of their connection.

So what is a web application? In one of the posts in this series, we will delve deeper into the common use cases we see and how the different frameworks work with them, but for now, let’s consider a web application anything that at run-time uses the three core web technologies of HTML, JavaScript, and CSS.

What is a framework?

As much as the web platform has changed over the last 20 years, so have the tools we need to build applications. For many interesting historical reasons, HTML, JavaScript, and CSS don’t actually fit together very well. Increasingly we are seeing synergies among the standards bodies that define these technologies, but that hasn’t always been the case. JavaScript was originally designed in two weeks as a lightweight scripting language, not a language to power the ubiquitous web of connected devices we now live in. It gained its throne not on its merits, but by a quirk of fate.

For many interesting historical reasons, HTML, JavaScript, and CSS do not actually fit together very well.

Since JavaScript was delivered, people have been trying to fix JavaScript. If they were not trying to fix JavaScript, they were trying to make the APIs that allow access to the HTML better (the Document Object Model or DOM). These, of course, all come with someone’s opinion of the right way to build applications.

Early on, it was a wild west of browsers, with each browser vying for slightly unique selling points (USPs) they hoped would lead them to dominate the internet. This created a set of unique challenges for those developing web applications, as each USP was a way in which your application would not work as expected in another browser, and for those who could not force a particular version of a browser on someone (which would be every customer-focused business entity in the world!), it was a landmine that could and would frustrate your users. So early on, the likes of Prototype, MooTools, Dojo 1, jQuery, and others really focused on trying to level the playing field, so that you could actually write code that worked in every browser.

As the web matured, web applications became more complex. It became far more than stringing together some web forms. User expectations started to increase towards richer UIs delivered over the web. At the same time, the browser market shifted away from Internet Explorer to the likes of Chrome, Firefox, and Safari. Some of the previous toolkits, like Dojo 1, grew into this rich UI space, alongside the rise of commercial solutions like ExtJS. This continued to evolve over several years with the introduction of Backbone, Ember.js, and eventually Angular 1. Libraries that started as collections of browser patches and APIs became application framework platforms.

Over the past couple of years, we have seen the rise of the most mature set of frameworks. A virtual cornucopia of options. By and large, these all focus on creating extremely rich UI/UX experiences that work across the gamut of devices from desktop to mobile. Some frameworks provide highly structured ways to build applications and some focus on solving just a single problem, expecting to be incorporated into other frameworks to build a whole application.

So, for this series of blog posts, we are going to consider a framework as any significant solution that, at run-time, uses idiomatic JavaScript and the other technologies of HTML and CSS. While it is impossible to have an exhaustive set, we are going to focus on the following:

Angular 2+

React + Redux

Vue.js

Dojo 2

Ember.js

Aurelia

What should I look for?

We will start that answer with a question: “What do you need?” While we are going to dig into as many aspects of these frameworks as we can, our analysis will not be exhaustive. Part of what drives this plethora of choice is the wide variety of different needs. We would even say that it might be folly for any large enterprise to enforce a one-size-fits-all mandate when it comes to frameworks. While there are synergies that come from sharing code and having similar skills among development teams, most businesses these days have a variety of needs, from consumer-facing websites to inward-facing business applications. No framework works best in every use case.

Your job, as the reader, is to come up with a checklist of your needs, driven by what you want to accomplish. Here are some questions to consider about what you need, which can provide context as you go through this series:

What type of web application do I need to build? Is it a consumer-facing website? Is the web application a consumer product? Is the application a business application which will be used by an expert user who knows a lot about the macro business process? Will the application be displaying a lot of data and need to visualize it in an actionable way?

What environments will this run in? Will it run on mobile? Do I need a native experience on mobile? Will I be able to control the browser set on which my application is run?

What sort of user interactions will I need to deliver? Will I be comfortable with off the shelf user elements or are the user interactions actually a key component of the value that my application provides? Is my application just part of a larger federated set of web applications, where I need to have a seamless user experience?

How much of my application will be client-side? Will my business logic reside within the web application or will I need to interface to server-side systems which will drive the application? How much service integration and orchestration will I need to do within my web application?

What is my development environment going to be like? How integrated will my front-end team be with my server-side team? Is it full stack development? What will be the workflow between the product owners, the UI/UX designers, and the software engineers? What other technologies will I need to integrate into my application?

How am I going to know my application will work as expected? How do I want to test it? What is the ongoing lifecycle of the application? Will it be handed over to a support team or will the original engineers continue to maintain the application in perpetuity?

How much do I expect this application to take from the open source community and am I prepared to give back to the community? Will I have the right level of skills and experience to deal with issues when they occur? Who will I turn to when we get stuck?

Major Areas

Now that you have some of the questions you need to answer, we want to take you through a journey of looking at the aspects of the above frameworks through several different lenses. We plan to cover the following areas in this series:

User Interface Development – The foundation for most web applications is, of course, the user interface and we will discuss several aspects of user interface development for the selected frameworks.

User Experience Design – A web application is a lot more than just a bunch of user interface elements plopped on a web page. Web applications need to fit the ergonomics of your target user, and how you achieve that with your web application is an important consideration.

Foundation Technologies – It is important to understand the core of the framework. How are the frameworks built, and what do they do to ensure that web applications built with them work properly in different environments? How forward-looking are these frameworks, to help ensure your application will not be outdated too quickly?

Applications – Once you have a user interface, you need business logic to drive it. How do the frameworks provide this, and how rigid or flexible are those concepts?

Usage – Be it mobile first, expert-user business systems, consumer applications, or content management, all the frameworks deliver on these use cases in different ways and have varied strengths. Understanding where the frameworks excel (and maybe where they do not) will help in deciding what is best for you.

Integration – It is unlikely your web application will be fully self-contained. It will need to interface with different systems. The approach each framework takes to this is an important consideration.

Testing – How do each of the frameworks approach testing as well as what is a good testing approach?

Soundness – How well does each framework promote patterns and use technologies that help ensure well-designed and well-behaved applications?

Building – What does the tooling and build pipeline look like for each of the frameworks?

Community – What is the community around each framework? Are those skills available on the market? Who do you turn to when you need help?

Let’s go

Popularity is not a good barometer for web application frameworks. This is because they all meet different needs in different ways. Having a good understanding of what you need will help you determine which framework is right for your particular application. So hopefully you are strapped in and ready to embark on this adventure together as we strive to explore the depths of web application frameworks!

As we noted in our post about the open and incremental approach to TC39, one of the challenges facing TC39 is that it has grown in size substantially along with the community interest in JavaScript. ECMA has started to address this by creating a few additional standards bodies for sections of JavaScript that can be decoupled from the core language.

ECMA-414

ECMA-414 is the meta-specification for all current and future JavaScript standards groups. It currently contains references to three standards and the internal ECMAScript test suite.

ECMA-262: JavaScript

ECMA-262 is the core standard: the ECMAScript language specification itself, which defines the syntax and semantics of JavaScript and continues to evolve through TC39’s open, incremental proposal process.

ECMA-402: Internationalization

If you have ever worked on creating an internationalized or localized application, you quickly learn that language translation, currency, dates, and other features are far from simple. Internationalization is sufficiently complex that it was wisely split off into the ECMA-402 standard. It is refined through the same open process via GitHub.
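The complexity pays off for developers: the work standardized in ECMA-402 surfaces in engines as the Intl namespace, handling locale-sensitive formatting that would be painful to hand-roll. For example:

```javascript
// ECMA-402 in practice: locale-aware number and date formatting.
// Locale data comes from the runtime (e.g. Node built with full ICU).
const euros = new Intl.NumberFormat('de-DE', {
  style: 'currency',
  currency: 'EUR'
});

// German conventions: "." groups thousands, "," marks decimals.
console.log(euros.format(1234.5)); // e.g. "1.234,50 €"

const when = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'long', day: 'numeric', timeZone: 'UTC'
});
console.log(when.format(new Date(Date.UTC(2017, 0, 15)))); // "January 15, 2017"
```

Swapping the locale tag is all it takes to change conventions, with no translation tables maintained in application code.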

ECMA-404: JSON

The JSON data interchange format or syntax is part of ECMA-404. While the syntax is valid JavaScript, its rise in popularity as a data format makes it a good candidate for a separate standard given the need to support it within almost any programming language today.
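Because the syntax overlaps with JavaScript's own literal syntax, every engine can parse and serialize it natively (the values below are, of course, illustrative):

```javascript
// A JSON document arriving as text, e.g. from an HTTP response body.
const text = '{"standard": "ECMA-404", "revision": 2, "keywords": ["data", "interchange"]}';

// Parsing yields ordinary JavaScript objects, arrays, and primitives.
const doc = JSON.parse(text);
console.log(doc.standard);        // "ECMA-404"
console.log(doc.keywords.length); // 2

// Serializing back to a string normalizes the whitespace.
const roundTripped = JSON.stringify(doc);
console.log(roundTripped); // {"standard":"ECMA-404","revision":2,"keywords":["data","interchange"]}
```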

Summary

The JavaScript standards group seems to have learned from some of the challenges of CSS and XHTML, which created so many modular standards that it was at times difficult to keep track of all of the modules. By keeping the core language features in one standard while splitting off things that are fairly orthogonal to the language, they have done a solid job of separating concerns where it makes sense, helping ancillary features of the language iterate somewhat independently of TC39. That said, there is still quite a bit of communication between the groups in charge of each of these standards so they do not diverge in their approach.