Ten years ago, improving the performance of a website usually meant
tweaking the server-side code to spit out responses faster. Web
performance engineering has come a long way since then. We have
discovered patterns and practices that improve the (perceived)
performance of websites just by changing the way the front-end code
is structured, or by tweaking the order of elements on an HTML page.
The majority of these experiments and this knowledge has centered on
delivering content to the user as fast as possible.
Today, websites have grown into complex applications that offer the
same fidelity as applications installed on computers. Consequently,
consumers have started to compare the user experience of native
apps with that of web applications. Providing a rich and fluid
experience as users navigate web applications has started to play a
major role in the success of the web.
Most modern browsers have excellent tools that help measure the
runtime performance of websites. Chrome DevTools features a powerful
Timeline panel
that records the tracing information needed to diagnose performance
problems while interacting with a website. Metrics like frame rates
and paint and layout times, derived from these logs, present an
aggregated view of the website's runtime performance.
In this article, we will explore ways to collect and analyze some of
those metrics using scripts. Automating this process can also help us
integrate these metrics into the dashboards that we use to monitor the
general health of our web properties. The process usually consists of
two phases – getting the tracing information, and analyzing it to get
the metrics we are interested in.

Collecting Tracing information

The first step in getting the metrics is to collect a performance trace from Chrome while interacting with the website.

Manually recording a trace

The simplest way to record a trace is to hit the "start
recording" button in the Timeline panel, perform interactions like
scrolling the page or clicking buttons, and then finally hit "stop
recording". Chrome processes all this information and shows graphs of
frame rates. All this information can also be saved as a JSON file by
right-clicking (or Alt-clicking) on the graph and selecting the option
to save the timeline.

Using Selenium

While the above method is the most common way to collect tracing
information from Chrome, doing this repeatedly over multiple deployments
or for different scenarios can be cumbersome. Selenium
is a popular tool for scripting interactions like navigating
web pages or clicking buttons, and we can leverage such scripts to
capture trace information.
To start performance logging, we just need to ensure that certain parameters are added to the existing set of capabilities.
Selenium has bindings for multiple programming languages,
and if you already have Selenium tests, you can add the capabilities
in the following examples to also collect performance logs. Since we are
only testing Chrome, running just ChromeDriver (instead of setting up a full Selenium server) and executing the following Node script gets us the trace logs.
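A sketch of such a script, assuming the selenium-webdriver npm package; the URL, port and sleep duration below are placeholders, not the exact values from the original script:

```javascript
// Capabilities that turn on Chrome's performance log and the trace
// categories discussed in this post.
const capabilities = {
  browserName: 'chrome',
  loggingPrefs: {performance: 'ALL'}, // enable the performance log
  chromeOptions: {
    // exposes window.chrome.gpuBenchmarking, used below to scroll smoothly
    args: ['--enable-gpu-benchmarking'],
    perfLoggingPrefs: {
      traceCategories: 'devtools.timeline,blink.console,benchmark'
    }
  }
};

// Call run() with chromedriver already listening on its default port (9515).
function run() {
  const webdriver = require('selenium-webdriver');
  const driver = new webdriver.Builder()
    .usingServer('http://localhost:9515')
    .withCapabilities(capabilities)
    .build();
  driver.get('http://example.com'); // placeholder URL
  // smooth-scroll using the API exposed by --enable-gpu-benchmarking
  driver.executeScript(
    'chrome.gpuBenchmarking.smoothScrollBy(2000, function() {});');
  driver.sleep(5000);
  driver.manage().logs().get('performance').then(entries => {
    // each log entry wraps a JSON string; unwrap to get the trace events
    const events = entries.map(entry => JSON.parse(entry.message).message);
    require('fs').writeFileSync('trace.json', JSON.stringify(events));
    return driver.quit();
  });
}
```

The scroll command here is just one example interaction; it can be swapped for clicks, typing, or any other Selenium command.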

The script above tells Selenium to start Chrome with performance logging
and enables specific trace event categories. The "devtools.timeline"
category is the one Chrome DevTools uses to display its graphs
about paints, layouts, etc. The "enable-gpu-benchmarking" flag exposes a
window.chrome.gpuBenchmarking object that has a method to smoothly scroll the page. Note that the scroll can be replaced by other Selenium commands like clicking buttons, typing text, etc.

Using WebPageTest

WebPageTest also relies on the performance log to display metrics like SpeedIndex. As shown in the image, the option to capture the timeline and trace JSON files has to be enabled. Custom actions
can also be added to perform interactions in a WebPageTest run scenario.
Once the WebPageTest run finishes, you can download the trace files by
clicking the appropriate links. This method can be used not only with
WebPageTest, but also with tools like SpeedCurve that support custom metrics.

Analyzing the Trace Information

The trace file that we get using any of the above methods can be loaded in Chrome to view the page's runtime performance. The trace event format specifies that each record contains a cat (category), a pid (process id), a name,
and other parameters that are specific to the event type. These records
can be analyzed to arrive at individual metrics. Note that the data
obtained using the Selenium script above has some additional metadata
on every record that may need to be scrubbed before it can be loaded in
Chrome.
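For illustration, a single record might look like this (the field values here are made up; the shape follows the Chromium trace event format):

```javascript
// An illustrative trace record; a real trace contains thousands of these.
const record = {
  cat: 'devtools.timeline', // category this event belongs to
  pid: 9312,                // id of the Chrome process that emitted it
  tid: 1,                   // thread id
  ts: 1131070,              // timestamp, in microseconds
  ph: 'X',                  // phase: 'X' marks a complete event with a duration
  name: 'Paint',            // the event type
  dur: 2086,                // duration in microseconds (complete events only)
  args: {}                  // event-specific payload
};
```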

Framerates

To calculate frame rates, we can look at events named DrawFrame.
Each one indicates that a frame was drawn on the screen; dividing
the number of frames drawn by the duration of the test gives us
the average frame rate. Since we also have the benchmark category enabled, we can look at "BenchmarkInstrumentation:*" events that have timestamps associated with them. Chrome's telemetry benchmarking system uses this data to calculate average frame rates.
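As a sketch, assuming records shaped like the trace event format (with ts in microseconds), the average frame rate could be derived like this:

```javascript
// Estimate the average frame rate from DrawFrame events in a trace.
function averageFrameRate(events) {
  const frames = events.filter(e => e.name === 'DrawFrame');
  if (frames.length < 2) return 0;
  // ts is in microseconds; convert the elapsed time to seconds
  const seconds = (frames[frames.length - 1].ts - frames[0].ts) / 1e6;
  return (frames.length - 1) / seconds;
}
```

For a page drawing a frame every 16,667µs, this works out to roughly 60 frames per second.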

Paints, Styles, Layouts and other events

The events ParseAuthorStyleSheet, UpdateLayoutTree and RecalculateStyles are usually attributed to the time spent in styles. Similarly, the log also contains Paint and Layout events that can be useful. JavaScript time can be calculated from events like FunctionCall or EvaluateScript, and we can add the GC events to this list.
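A minimal sketch of such an aggregation, assuming complete events that carry a dur field in microseconds (the bucket names are my own grouping):

```javascript
// Total up the time spent per bucket (Styles, Paint, Layout, Script), in ms.
const BUCKETS = {
  Styles: ['ParseAuthorStyleSheet', 'UpdateLayoutTree', 'RecalculateStyles'],
  Paint: ['Paint'],
  Layout: ['Layout'],
  Script: ['FunctionCall', 'EvaluateScript', 'GCEvent']
};

function timePerBucket(events) {
  const totals = {Styles: 0, Paint: 0, Layout: 0, Script: 0};
  events.forEach(e => {
    Object.keys(BUCKETS).forEach(bucket => {
      if (BUCKETS[bucket].indexOf(e.name) !== -1) {
        totals[bucket] += (e.dur || 0) / 1000; // microseconds -> ms
      }
    });
  });
  return totals;
}
```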

Network Metrics

The trace logs also contain information like firstPaint. However, the ResourceTiming and NavigationTiming
APIs make this information available on the web page anyway. It may be
more accurate to collect these metrics directly from real user
deployments using tools like Boomerang, or from WebPageTest.

Considerations

There are a few things to consider while running the tests using the methods mentioned above.

Getting Consistent metrics

Chrome tries very hard to draw the page as fast as possible,
and as a result you may see variations between the trace logs
gathered from two runs of the same scenario. When recording
traces manually, human interaction may introduce differences in the way a
mouse is moved or a button is clicked. Even when automating, Selenium
commands travel over the network, and executing them is also recorded
in the logs.
To get consistent results, all this noise needs to be reduced. It
is prudent not to combine scenarios when recording trace runs.
Additionally, running all automation scripts as a single unit also
reduces noise. Finally, comparing test runs across multiple deployments
is even better, since both runs carry the same test-script
overhead.

Trace file size

The trace file can take up a lot of space and can also fill up
Chrome's memory. Requesting the logs from ChromeDriver in a
single, buffered request can also result in Node throwing out-of-memory
exceptions. The recommendation here is to use libraries
like JSONStream
to parse the records as a stream. Since we are only interested in
aggregating the individual records, streaming lets us consume longer
scenarios that would otherwise take too much memory.
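The aggregation itself can be written as a reducer that sees one record at a time, so it works the same whether it is fed from an in-memory array or wired up as a JSONStream 'data' handler. A sketch, with made-up metric fields:

```javascript
// A streaming-friendly aggregator: no record is retained after onRecord,
// so memory use stays flat no matter how long the scenario runs.
function createAggregator() {
  let frames = 0;
  let paintMicros = 0;
  return {
    // e.g. fs.createReadStream(f).pipe(JSONStream.parse('*'))
    //        .on('data', aggregator.onRecord)
    onRecord(record) {
      if (record.name === 'DrawFrame') frames++;
      if (record.name === 'Paint') paintMicros += record.dur || 0;
    },
    result() {
      return {frames: frames, paintTimeMs: paintMicros / 1000};
    }
  };
}
```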

Browser Support

The performance logs are available in recent versions of the
Chrome browser, both on the desktop and on Android. I worked on a commit to make similar trace logs available for mobile Safari. Hybrid applications using Apache Cordova run in WebViews that are based on these browsers, so this method can also be applied to Apache Cordova apps for Android and iOS.
Note that these performance logs started out WebKit-specific, and hence are not readily available for Firefox or IE yet.

Conclusion

All the techniques described above are packaged together in the browser-perf
package. browser-perf only adds the extra logic to smooth out edge
cases, like the scrolling timeout, the ability to pick specific
metrics, and processing the trace file as a stream. I invite you to check
out the source code of browser-perf to understand these optimizations.
These metrics can be piped into performance dashboards, and watching
them should give us an idea of the general trends in how the website
has performed across multiple commits. This could be one small step
toward ensuring that websites continue to deliver a smooth user
experience.

React's fast reconciliation algorithm is one of the key pieces that enables an entire web application to be re-rendered on every state change in a performant way. This "diff-ing" logic compares the original and changed virtual DOMs and produces an efficient, minimal set of instructions to modify the actual DOM in the browser.
In an earlier post, I ran experiments to compare the rendering performance of different JavaScript frameworks. React was one of the faster frameworks, and the tests offered an interesting insight into how React works during a render cycle. While other frameworks are busy with paints/layouts or modifying DOM nodes, React spends most of its time in JavaScript events, presumably running the diffing algorithm. For a smooth website running at 60 frames per second, each frame needs to be computed in less than 16ms. Translated to React's case, each reconciliation and the subsequent DOM modification should happen within that short time frame.
Recently, React Native has also shown how separating the UI thread from the virtual DOM computations can make applications fast. I wanted to see if we could achieve similar performance gains for web pages by backing the web UI's DOM updates with DOM-diffing computations that happen in a separate thread (a web worker). An open issue about this idea exists in the React issue tracker, and I hope to present data that could validate (or invalidate) the hypothesis that web workers would make React even faster.
The resulting solution should also satisfy the constraint that it can be a drop-in replacement for the usual react-dom that runs on the main thread.

Demo

Here is a demo of the DBMonster application rendered using web workers, compared to normal React rendering. Each case shows the frame rates as the number of DOM nodes being rendered increases.

As the video shows, the effect of the web workers becomes apparent as the number of nodes increases. You can run the sample app using the instructions in the README file, or look at the gh-pages branch to see it in action.
The repository also contains the custom renderer, which is based on react-blessed and react-titanium. This custom renderer can be tried out in other projects; note that it does not yet implement events.

Observations

Batching
The app tested may not be a typical React app, but it does stretch the virtual DOM diffing to its limits. While it may seem that separating the UI thread from the virtual DOM computations should always make the process faster, there is a cost associated with the postMessage call used to communicate between the web worker and the UI thread. In fact, this cost is significant enough to make the process much slower if we called postMessage for every DOM mutation.
Batching these calls helps a lot, and I ran a couple of experiments to see how different batch sizes impact the speed of updates.
All the frame rates were collected using browser-perf.
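The batching idea itself can be sketched as follows; createBatcher and the flush callback are illustrative names standing in for the worker's postMessage, not the actual renderer's API:

```javascript
// Accumulate DOM mutation messages and only cross the worker boundary
// once a batch is full.
function createBatcher(batchSize, flush) {
  let queue = [];
  return {
    push(mutation) {
      queue.push(mutation);
      if (queue.length >= batchSize) this.flush();
    },
    // Also called at the end of a reconciliation pass, so a partially
    // filled batch still reaches the UI thread.
    flush() {
      if (queue.length > 0) {
        flush(queue); // stands in for worker.postMessage(queue)
        queue = [];
      }
    }
  };
}
```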

In the graph, the y axis represents frames per second, while the x axis is the number of nodes rendered. Each line indicates the batch size - e.g. batch-1000 means that 1000 DOM modification messages were sent per postMessage call.

While calling postMessage on every mutation is slow, making the batch size too large also seems to be bad: a very large batch hands the UI thread a huge number of DOM operations at once, and applying them all in one go slows down the render.
The best approach seems to be a hybrid one. We can create a feedback loop that tells the worker how long a batch of DOM updates took to run. The worker in turn uses this information to estimate the number of updates in the next batch, aiming for a situation where we neither take more than 16ms per batch nor leave any free time on the table.
This strategy can fail when the reconciliation algorithm generates more updates than the DOM can handle, creating a backlog. One way to fix this is to discard old render calls corresponding to previous states, since React only needs to render the app in its latest state.
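The feedback loop could be sketched as a function that resizes the next batch based on how long the last one took to apply; the 16ms frame budget and the names here are illustrative, not the actual implementation:

```javascript
// Estimate how many DOM updates the next batch should carry so that
// applying it takes roughly one frame (~16ms), based on feedback from
// the UI thread about how long the previous batch took.
function nextBatchSize(lastBatchSize, lastBatchMs, frameBudgetMs) {
  frameBudgetMs = frameBudgetMs || 16;
  if (lastBatchMs <= 0) return lastBatchSize; // no signal yet; keep the size
  const scaled = Math.round(lastBatchSize * frameBudgetMs / lastBatchMs);
  return Math.max(1, scaled); // never shrink to an empty batch
}
```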

Multiple Top Level Components
Most real applications have only one top-level component. Out of theoretical curiosity, I ran this experiment with multiple top-level components to see if two or more workers are better than one. The theory is that the virtual DOM tree could be split and the "diffs" calculated concurrently.
In practice, the single UI thread to which all the DOM updates are eventually sent becomes the bottleneck, and careful batching and discarding of old renders seem to be the only way such parallelization could help. Here is a graph that shows how it all shaped up.

In the legend, normal-2 indicates normal react-dom with 2 top-level components, while worker-3 indicates web workers with 3 top-level components.
As the graph shows, worker-based updates with 3 top-level components are slower than with 2 top-level components, while 3 normal react-dom updates are comparatively not as bad.

RoadMap

For the purposes of this performance experiment, I have not yet implemented events, but any real app using this renderer would need them. Events are tricky since they are not processed synchronously and thus cannot call methods like preventDefault or stop propagation. Events will have to be treated the way React Native treats them. This renderer would work even better with a project like react-native-web, where the concept of events is closer to what this project has.
I have only implemented a basic ReactComponent and would also need to implement the other, more complex components that react-dom implements. Finally, this renderer also needs a better distribution story, since there are now two separate files that have to be included in the application.

Conclusion

The experiments above show that while web workers have potential, they may only benefit apps that have many nodes to compare but result in a smaller number of nodes that actually need to change. I believe that converting React itself to use workers may not benefit all classes of applications equally, but a separate worker-based renderer would definitely cater to the class of applications with a large number of components.

One of the biggest benefits of using Cordova for writing mobile apps is the ability to target multiple platforms like iOS and Android with the same app. To ensure that the app works well on all supported platforms, the test matrix needs many devices to test the app on.
Most development shops usually have something like a "device wall".

This is usually a range of phones and tablets connected to computers via USB cables, mounted on a wooden board on a wall. Setting up this common infrastructure, or maintaining it, is usually not easy. While device walls may be great for running automated tests, manual testing means that a tester has to interact with each device individually - a time-consuming process.
Over the weekend, I hacked together a solution that creates a "virtual" device wall. It currently supports Cordova apps, and I plan to extend it for use with React Native. Here is a video showing off the capabilities of the virtual device wall.

Using the virtual device wall is simple: install the node module called virtual-device-wall in the same directory as your Cordova project, then run node_modules/.bin/wall from the command line, as described in the project's README.
This command builds the Cordova app, uses the browser-sync capability from cordova-plugin-browsersync, and uploads the app to appetize.io. Once on Appetize, the live screens of the phones are embedded into a web page that serves as the device wall.

While this is just an initial prototype with four static devices, the idea can definitely be extended to let testers add more devices dynamically, or to plug in other cloud-hosted services. You could even hook your continuous integration system into this to deliver the "virtual device wall" to your testers on every commit.
The prototype is open source and is hosted on GitHub. You can try it out using npm, or by following the instructions on the project's home page.

If you think this is a cool idea and would like to try it out, send me a ping and I would love to help you out. There are a bunch of features (like more devices, CI integration, etc.) that I am planning to add, and I would love your input on what to prioritize first :)

As I started to use React Native to write non-trivial, production-facing apps, I realized that I needed solutions that would help me share beta versions of my app, get user feedback and analyze crash reports. For native apps, I had used HockeyApp and loved it for its simple API and multi-platform support.
My workflow with HockeyApp for native apps was pretty simple.

Beta testers use the HockeyApp app for iOS or Android to download the beta app and test it.

The HockeyApp SDK was integrated into my app for collecting feedback, analyzing crashes, and notifying testers of any changes that I make.

I wanted similar capabilities for the React Native apps that I build. In my last post, I wrote about the Cordova Plugin Adapter for React Native, and I was able to use it to integrate the HockeyApp SDK into my app, using this Cordova plugin.

Given that the Cordova plugin adapter did most of the setup work, I did not have to manually add permissions or modify my activity as specified in the docs. I just had to do the following:

Then I simply require('react-native-cordova-plugin') and start using the API as below:

Initialize the plugins using cordova.hockeyapp.start(success, fail, token) on componentDidMount.

cordova.hockeyapp.checkForUpdate(success, failure); to check if there are new versions. I ran this on startup of the app, and also had a button to refresh the app.

cordova.hockeyapp.feedback(success, failure); opened the screen for users to provide feedback about the app and was hooked to a button.

For testing, I also used the cordova.hockeyapp.forceCrash() API to simulate crashes. There was a bug in the app that made it crash on a beta tester's phone, and I did get the crash reports.

The entire code showing each of these functions is available in this gist. I also created a demo video showing the APIs in action.

I was only able to get HockeyApp working on Android, since the react-native-plugin-adapter only works on Android for now. If you would like a version for iOS too, or would like to try integrating HockeyApp into your React Native app, please do ping me and I will be happy to help.

Apache Cordova has a pretty vibrant ecosystem of plugins. These plugins usually comprise two parts - a native component that interacts with the device APIs, and a JavaScript layer that makes these APIs simpler. Creating native modules with React Native follows a similar model.
Given that more than 1000 Cordova plugins exist today for accessing device capabilities from JavaScript, it only seems logical to be able to reuse them in React Native projects.

This blog post is about the internals of how Cordova plugins can be used with React Native. The scenario only works on Android for now, as shown in the video below.

[Link to Video]
Update: Note that the demo adds plugins using node_modules/rncp/cli/cli.js. Now that the module is on npm, you could simply use node_modules/.bin/cordova-plugin add pluginname.

Step 0: Installing the Cordova Plugin Adapter

Using native modules in a React Native project usually means adding the
dependencies in build.gradle, adding the sub-project, etc. Instead of
doing this for every single Cordova plugin, I was able to leverage
Cordova's node-based plugin manager (called plugman). Hence, you add the
dependencies only once to set up the "Cordova Adapter for React Native"
(let's call it CARN).
CARN is on npm and you can install it locally in the React Native project.

Step 1: Installing the plugins

With CARN installed, you can use its command line interface to add plugins.

Add plugins using $ node_modules/.bin/cordova-plugin add cordova-plugin-name. This command simply relies on plugman to download the plugin from npm.

Once the plugin is downloaded, plugman looks at the plugin's plugin.xml file to determine any other plugins that need to be installed as dependencies - all of these are recorded in assets/xml/config.xml to be used later.

Plugman then copies over the JavaScript files, Java files and JARs to the appropriate locations. These locations are defined in CARN's build.gradle.

Plugman also looks at plugin.xml to change config files like AndroidManifest.xml to add any activities or permissions that are needed for the plugin to work well.

Finally, CARN combines all the JavaScript files from all the plugins using Cordova's module builder system (which is similar to browserify).

This combined JavaScript is later "required" by React Native's index.android.js to start using the plugins.

Step 2: Using the Plugin

All Cordova plugins are available when CARN's javascript is "required" in ReactNative JavaScript. For example, the API that cordova-plugin-device exposes is typically called using require('react-native-cordova-plugin').navigator.device.getInfo(successCallback, failCallback).

All the JavaScript APIs that Cordova plugins expose eventually call a cordova.exec(success, fail, service, action, args) method, which registers the callbacks against a generated callbackID and in turn calls the exec method exposed on the Java side using the WebView's addJavascriptInterface. Unlike Cordova, React Native does not have a WebView, so we hijack this method and use React Native's Native Modules to expose a corresponding exec method on the Java side.

Once the exec method on the Java side is invoked, it starts Cordova's Java PluginManager. The PluginManager consults assets/xml/config.xml to find the class name for the service (which is the plugin name) and calls the corresponding action on the class provided by the plugin.

All results are asynchronous, so the plugin continues working on the method call. A plugin may have to invoke another activity like in the case of a ContactPicker or Camera using startActivityForResult.

The plugin is finally done with its work. In case another activity (like the contact picker) was invoked, a contact is picked and our React Native MainActivity is started again with the result from the previous activity. This result is passed to the plugin's onActivityResult callback.

Now that we have the result, the plugin manager calls the WebView to deliver the result back. In our case, we instead have a MockWebView for React Native that uses the RCTEventEmitter to deliver the result back.

We already have a listener on the JavaScript side that looks at the result, which also contains the callbackID. We consult our list of callbackIDs and call the corresponding successCallback or failCallback.
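The JavaScript half of this flow can be sketched roughly as follows. To keep the sketch self-contained, nativeExec and onNativeResult are injected here as plain functions; the real adapter wires them to React Native's native module and event emitter:

```javascript
// Sketch of the JS side of the exec bridge: register callbacks by id,
// hand the call to the native side, and dispatch results as they arrive.
function createBridge(nativeExec) {
  let nextId = 0;
  const callbacks = {};
  return {
    // what cordova.exec is redirected to in this setup
    exec(success, fail, service, action, args) {
      const callbackId = service + '-' + nextId++;
      callbacks[callbackId] = {success: success, fail: fail};
      nativeExec(service, action, callbackId, args || []);
    },
    // invoked when the native side emits a result for a callbackId
    onNativeResult(callbackId, ok, payload) {
      const cb = callbacks[callbackId];
      if (!cb) return;
      delete callbacks[callbackId]; // one-shot: drop after dispatch
      (ok ? cb.success : cb.fail)(payload);
    }
  };
}
```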

The code itself is pretty straightforward and mostly reuses Cordova's plugman and PluginManager for the heavy lifting. The GitHub repository has all the code, and the best places to start looking are the CordovaPluginAdapter for the Java side of things and index.js for the JavaScript side.

Next Steps

The steps described above are Android-specific, and I assume we would have a very similar flow for iOS. Including almost all of Cordova at runtime may not be the most efficient approach, and I am working on trimming down the dependencies to make this even faster.
I would love to hear any feedback you may have, and any suggestions or contributions for the Android or iOS implementations.

[1] Note that the demo is on Windows. React Native still has issues working on Windows, and I had to modify some files in React Native to get it to work. [2] If you are using Windows, I would recommend using this Android emulator - it works with Hyper-V enabled and is super fast!

As a web developer, I have always loved the quick edit-preview cycle when working on web pages. Tools like BrowserSync enable features like live reloading when HTML/CSS/JS files change, and even synchronize scrolls, clicks and form inputs across multiple devices.
Since Apache Cordova is based on the same web technologies, this goodness of web development can also be carried over to Cordova projects. There are a couple of plugins like gap-reload that integrate pretty well with live reload. This blog post is about my experiment to build something that leverages the capabilities of BrowserSync.

Demo

Here is a quick two-minute demo showing the usage and capabilities of browser-sync with Cordova. The video shows features like automatically reloading the app when files are edited, mirroring clicks, scrolls and form filling, and even the ability to use native, platform-specific plugins.

Integrating into your projects

The video above shows how this can simply be added as a Cordova plugin. The project's repository also describes how to use it as a project hook instead of installing it as a plugin. The repo is also structured to fit into any existing workflow; to that effect, the individual modules can be "required" and used in grunt or gulp.

How does it work?

BrowserSync watches for file changes and then reloads the web page, or injects the changed HTML or CSS into the page. Cordova is similar, since it displays content using WebViews. The steps for this plugin are:

The lib/browserSyncServer starts up browser-sync, watching for changes in your project's www folder, and serving the entire project folder using its static web server.

When anything in your project's www folder changes, cordova prepare is run to copy over the changes to platform specific folders. Other hooks are also run to ensure that any transformations you have in your workflow (like compiling less/sass files) continue to work.

BrowserSync mirrors scrolls and clicks using the current page's path as the key. This is a little tricky for Cordova, since the WebView content is loaded from platform-specific locations (like platforms/android/assets/www/index.html and platforms/ios/www/index.html). To normalize this, the hack I had to put in was to monkey-patch the canSync function that checks if events are from the same page; the change was to ignore the platform-specific folders.
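The path normalization at the heart of that monkey-patch can be sketched like this (normalizeWwwPath is a hypothetical helper for illustration, not part of the BrowserSync API):

```javascript
// Strip the platform-specific prefix so the same page served from
// different platform folders is treated as one page when syncing events.
function normalizeWwwPath(urlPath) {
  // platforms/android/assets/www/index.html -> /index.html
  // platforms/ios/www/index.html            -> /index.html
  return urlPath.replace(/^\/platforms\/[^/]+\/(assets\/)?www\//, '/');
}
```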

Caveats

The fact that the WebView content is served from an http:// server rather than the local file system makes it a little different from the actual Cordova app. Things like cross-origin AJAX requests, local storage and cookies, cdv:// files, etc. work differently. This is not a major issue, since most things like visual layouts and plugins work fine and account for most of the development time. Once the app is ready, the plugin/hook can be removed to test it one final time as the actual app.

Next Steps

To deal with the above caveat, I am looking at ways to use the HTML injector while continuing to serve the contents from the regular Cordova WebView's origin. Watch this space for more of my experiments with Cordova.

Performance is a feature; a fast website has a direct impact on revenue and user experience. Many of today's fast websites are built on top of front-end frameworks (like React or Ember) and assume that these are also fundamentally fast. While many of these frameworks do place a lot of emphasis on performance, the per-commit regression tests that are run in continuous integration environments (like Travis) only check for functional correctness. This blog post is about my experiments to set up a complementary continuous integration system for monitoring rendering performance regressions in some popular front-end frameworks.

Key metrics like frame rates, paint times, etc. are recorded and plotted on a graph. I run the perf tests for every released version of a framework. Running them per version can call out abrupt variations or gradual trends in the metrics and can help identify performance regressions. For example, here are some significant variations in the numbers, and the potential reason for each case.

Ember's frame rate graph shows improved frame rates for version 1.13.0. Ember's new rendering engine is fast as shown in this Ember with Glimmer video. The tests validate that with concrete numbers and indicate a 25% performance gain.

When React moved from 0.5 to 0.8, there was a marked increase in performance, as shown in the graph. I suspect this was due to the size reduction and the upgrades to browserify and jstransform. The performance metrics have been more or less stable since then. Some good optimizations are planned for 0.14, and I am hoping to see the graph show the differences too!

Testing all the Ionic components clearly shows that the web apps are much faster on Chrome on Android than a Cordova app, possibly due to the tax of using WebView.

The tests for Bootstrap show how the project has evolved over time. They show the cleanup that happened towards the 3.0 release, and how the components matured towards the end of the 2.0 line.

I plan to continue running the tests and updating the graphs as more versions are released. Currently, I manually trigger the tests for each version, and
am exploring ways to kick off the tests automatically, either by polling bower or through some other mechanism.

The test infrastructure is simple to set up. Each framework has a canonical web app, or a set of components rendered on a web page, as the target for testing. These target apps are available on GitHub for React, Ionic, Ember and Bootstrap, with instructions on how to set up and run the tests. The metrics were collected using browser-perf, stored in a CouchDB database, and drawn as a graph using perfjankie.
Though jsperf-like test cases that measure the speed of an event loop exist, there may be significant differences between event loop timings and when the browser actually draws a frame.

Help needed - I hope to cover more frameworks and versions, something like TodoMVC for performance. If you would like to add something similar for your web framework, ping me and I would love to help. If you have any changes that make the canonical app more like a real-world app, or additional test cases, please do send a pull request to the GitHub repositories, or leave a comment.

Ember.js recently announced that Glimmer landed in Canary and is now available in 1.13.0-beta.1. Glimmer is a new rendering engine for Ember, aimed at improving re-rendering performance. This blog post tries to quantify the improvements that Glimmer brings to Ember. There have been a number of JavaScript metrics for measuring rendering performance, but this experiment tries to measure the direct impact of the changes on how browsers render pages.
In a previous blog post where I compared the rendering performance of JavaScript frameworks, the Ember implementation did use Glimmer, and it was comparable in smoothness to the other implementations.

The Experiment

For the experiment, I compared the runtime rendering performance of Ember 1.12.0, the 1.12.0 betas, and the latest 1.13.0-beta.1 that includes Glimmer. The sample app used for comparison was a version of DBMonster picked
up from GitHub. DBMonster1 seems to be a good candidate for this test since it has a lot of independent updates in its various components. Each version of Ember was tested in Chrome on a Mac and on Android 5 (Nexus 7), and the results were aggregated over 30 runs.
Metrics like frame rates and times for layouts and paints were collected using browser-perf. The test code itself is pretty simple and is available as
a gist, with instructions to run the tests.

Results

The new rendering engine definitely improves frame rates by 25% on Desktop Chrome.

Frame rates (from rows meanFrameTime and mean_frame_time[2]) are better in the 1.13.0-beta.1 version where Glimmer is included. The variation in frame rates (from row frame_discrepancy) is also much lower with Glimmer, making the page smoother overall.

While average layout time is marginally lower with Glimmer and average paint time marginally higher, the biggest difference is the average time spent recalculating styles; Glimmer spends much less time on style recalculation. The average painted pixel area is the same though!

Interestingly, the number of nodes impacted per layout operation is higher with Glimmer.

From a JavaScript perspective, Glimmer spends much less time in function calls and garbage collection events. The number of function calls is higher, but the low time per function call keeps the page responsive. Not surprisingly, the number of timers (or ticks) is also higher with Glimmer.

Composite layers seem to tell different stories on desktop and mobile browsers.

The raw results are available in this Google spreadsheet. Look at the highlighted rows for interesting data points. If you are able to draw interesting conclusions about Glimmer from the data, please do leave them in the comments below.

Next steps

I have been using browser-perf to monitor the performance changes in Ember and am excited to see how the 1.13.0 final and 2.0 improve the framework. I currently only run DBMonster, but if you have other benchmarks or sample apps, please ping me and I would love to run the tests.
Just like functional tests catch functional regressions, it would be interesting to see if these perf tests can run in a continuous integration environment to catch perf regressions.

[1] Note that this data is only for DBMonster and may not be universally correct. It could be different for your apps. As with any performance profile, I would recommend running tools like browser-perf on your app to monitor performance regressions.
[2] mean_frame_time is calculated from tracing logs, while meanFrameTime is calculated using requestAnimationFrame. The former is more accurate; the latter is called when the UI thread is free and may not always correspond to what is drawn on the screen.
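To make the footnote concrete, here is a sketch of how a mean frame time and a discrepancy-style variation number can be derived from requestAnimationFrame timestamps. The real tracing-based metrics are computed differently, so treat this purely as an illustration:

```javascript
// Derive frame statistics from a series of requestAnimationFrame
// timestamps (in ms). Illustrative only - not the tracing math.
function frameStats(timestamps) {
  var deltas = [];
  for (var i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  var mean = deltas.reduce(function (a, b) { return a + b; }, 0) / deltas.length;
  // Approximate "discrepancy" as the mean absolute deviation of frame
  // times: lower means frames were more uniformly spaced, i.e. smoother.
  var discrepancy = deltas.reduce(function (a, d) {
    return a + Math.abs(d - mean);
  }, 0) / deltas.length;
  return { meanFrameTime: mean, discrepancy: discrepancy };
}

// In the browser, the timestamps would be gathered with something like:
//   var stamps = [];
//   requestAnimationFrame(function loop(ts) {
//     stamps.push(ts);
//     if (stamps.length < 120) requestAnimationFrame(loop);
//   });
```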

Traditionally, icons on a web page have been drawn using bitmap images (png, jpg, etc.). These images are often stitched together into sprites to deliver them faster over the wire. However, bitmaps do not scale well across screen resolutions - a problem for responsive web pages.
Font icons have proved to be a great alternative, with many popular UI frameworks bundling icon fonts. Another common alternative is SVG; the purpose of SVG is to draw scalable vector graphics anyway!
While there has been a good amount of research on optimizing the network performance of font icons and SVGs, I could not find data about the performance of these icons after they have been delivered and rendered in the browser.

Each page rendered the icons in one of three ways:

Font Icon - the icon rendered as a glyph from an icon font applied via CSS

Inline SVG - the SVG markup for the icon embedded directly into the HTML

Background SVG - SVG inserted as a data:uri in a class applied to repeated elements on the web page

The web page had each of these icons repeated multiple times making the page long enough to scroll.
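For reference, the background SVG variant looks roughly like this in CSS (an illustrative circle icon, not one of the actual Ionic icons used in the tests):

```css
/* Background SVG: the icon lives in a data:uri on a class that is
   applied to many elements on the page */
.icon-circle {
  display: inline-block;
  width: 32px;
  height: 32px;
  background-image: url('data:image/svg+xml;utf8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32"><circle cx="16" cy="16" r="14"/></svg>');
  background-repeat: no-repeat;
  background-size: contain;
}
```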

Test environment

The tests were run on the icons for Ionic, which are available both as SVGs and as a font.
For each icon, 3 web pages were created, one each using a font, inline SVG and background SVG. These pages were hosted on a static web server and accessed from Chrome over a LAN. The tests were run on Chrome 41 on an Android Nexus 7 device, with all other applications closed.
The tests were driven by Selenium using browser-perf. Each test waited till the page was loaded in Chrome to eliminate network anomalies. Each page was then scrolled, and data was collected from the Chrome timeline and tracing using browser-perf.
The repository is available on github, and the README file has instructions on running the tests in your environment to get results. If you have run the tests, please do ping me and we could compare the results.

Running each test multiple times, and watching trends across each of the 733+ icons, should eliminate noise in the data. Here are some of the trends that I noticed. Basically, the performance order, from best to worst, was

Font Icons > Background SVG > Inline SVG

The frame rates were best for font icons and worst for inline SVGs, both when measured using requestAnimationFrame and when using tracing benchmarks. There were cases where font icons and background SVGs were pretty close.

While the results for time spent in CompositeLayers were not conclusive, layouts and paints showed consistently that inline SVG took the most time. For paints, background SVGs consumed the least time, while the best candidate for layouts was inconclusive.

Background SVGs seem to have one paint event for each icon drawn on the screen.

Layout for all of them was steady since there was no change in the DOM positions of the icons once the page was loaded.

Only the background SVG had a Paint event and a Paint Image event for each icon drawn on the web page.

Inline SVGs are also rasterized the most.

Time for requestAnimationFrame was maximum for inline SVGs, while time for FireAnimationFrame was the inverse.

Background SVGs took the maximum time for updating Layers.

The raw results can be downloaded and are available in this spreadsheet. The results are best viewed when the file is downloaded and opened in a spreadsheet program; I found the Web viewer too slow for so much data.

Extrapolating the results, I may conclude that

Background SVGs and fonts are treated like images, while inline SVG is expensive, possibly because of the extra DOM nodes that can later be modified.

Background SVG data URIs are similar to png or other data URIs, except that they also scale.

The number of nodes in the SVGs seems to have no correlation with the data, possibly because the number is too small to have a significant impact.

Conclusion

Looking purely from a performance perspective, font icons seem to perform the best. However, they cannot be manipulated like inline SVG icons and are restricted to animating as a single unit. As with any web performance experiment, these are interesting results, but when using them on your site, maybe they should be tested (with something like browser-perf), just to be sure!
Watch this space for my other experiments with performance.

Background

Runtime performance is one of the key focus areas for most modern JavaScript frameworks. During a talk at the recent ReactConf 2015, a simple app simulating DB queries was used as a way to compare Angular, Ember and React. Many independent implementations of the test application have emerged since, each trying to showcase how fast the corresponding framework is.
Instead of comparing performance visually, I wanted to see if I could quantify the results and have a way to reproduce them. Note that while there is a metric measuring memory, rendering smoothness may not correlate directly with it.

Test Environment

The test suite compares the smoothness of DBMonster implementations in each of the frameworks. I ran the scroll/smoothness tests from telemetry using browser-perf to collect metrics like frame rates, layout times, nodes impacted per layout, etc.
Since some of the implementations in the original repo seemed slow, I picked the fastest implementation for each framework that I could find. I was able to compare the apps written for React, Ember, Underscore/Backbone, Ractive and Paperclip.
Metrics for each framework were averaged over 10 runs. Four such averages were collected for each framework to remove anomalies and to ensure that the metrics were consistent relative to each other. Note that I have not looked under the hood of each of the implementations.
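Once the per-framework averages are in hand, ranking the implementations is straightforward. The framework names and frame times below are made up purely to show the shape of the comparison:

```javascript
// Rank implementations by their mean frame time (lower is smoother).
// The sample data is illustrative, not the measured results.
function rankBySmoothness(results) {
  return Object.keys(results).sort(function (a, b) {
    return results[a].meanFrameTime - results[b].meanFrameTime;
  });
}

var sample = {
  react: { meanFrameTime: 30 },
  ember: { meanFrameTime: 35 },
  paperclip: { meanFrameTime: 18 }
};
console.log(rankBySmoothness(sample)); // ['paperclip', 'react', 'ember']
```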

Test Results

Here are some of the interesting facts from the tests.

Frames Per Second (calculated using requestAnimationFrame) - Paperclip was the smoothest, followed by Ractive, Underscore/Backbone, Ember and finally React.

Frames Per Second (using about:tracing benchmarking) - This metric is a little different and is much closer to actual screen frames. I was not able to collect the data for ractive or underscore. This metric indicated that though ember did not do well on animation frames, it definitely had better times when drawing on the screen.

Average time spent painting on the page was highest for ember. React was much lower and the other three were even lower, with similar numbers.

Layouts - React's virtual DOM shines here, making it spend the minimum time in layout operations. Underscore seems to spend the most, with Ember and the others in the middle.

GC Events - Ember spent the maximum time collecting garbage, while paperclip was much better again.

Nodes changed per layout cycle - Ember seems to change the most nodes, while React's virtual DOM seems to show its worth again here.

React was the only framework that seemed to emit events and parse HTML, the latter possibly due to JSX. The events may be the cause for lower frame rates in react despite it performing better at layouts and paints.

Here is the entire spreadsheet, showing averages from each test set, with interesting rows marked in green.

Next Steps

If you have trouble running these tests and see different results, please do ping me. If you are aware of faster implementations or would like to try this on your framework, I would be glad to help.
Clearly, developers are not just optimizing to deliver content to the user fastest, but also working to ensure that the content enables a smooth experience. I have also been reading about the Glimmer implementation for Ember and hope to build a test suite that measures the improvements in each incremental commit. I was also hoping to work a little more with React and Radium to profile for performance.