Last week, Bootstrap 3.0.0 was released. Bootstrap has now been in the wild for almost two years, helping web developers hide their imperfect aesthetic talents. Personally, it has saved me hours at hackathons trying to design a user interface that looks presentable. I thought it would be fun to look at the two-year history of this awesome open source project from the perspective of a web developer who now thinks more about performance and jank-free rendering.

The results

Since most people are interested in the results, I thought I would put them before the "how this was done" part. Here are some interesting trends that I noticed in the graphs.

Most components started off as simple CSS rules, but as they grew more complex, their performance seems to have dropped.

The performance drop seems to stop at the 2.3.2 release, and it looks like the latest 3.0.0 release was aimed at making things faster. Many components in 3.0.0 perform far better than their 2.3.2 counterparts.

It looks like the developers took a second look at most components and tried to rewrite or re-architect them to make them better. Most components show a sudden performance increase between 2.1.* and 3.0.0.

The base CSS has grown over time and, as a result, its performance has dropped.

Some components (CSS classes) did not exist in the early versions, and the graphs show how performance increased once CSS classes were introduced for them.

There are significant performance changes between the RC and final versions of 3.0.0. This could be due to incorrect CSS files that I generated, or something that actually changed in the final release.

Some of my data points are completely skewed (nav, for example), and I may have to re-run the tests to get good data.

I am not a statistician, and my reading of the results could be wrong. If you think some of my interpretations are crazy, please drop your opinions in the comments. If you are curious about how this data was generated, read on.

Testing Methodology

Topcoat is another great CSS framework with an emphasis on performance. The most impressive part of the framework is the integration of performance test suites with daily commits, and the way the developers catch any performance regressions they introduce.

TOPCOAT ROCKS!!!

Inspired by this system, I decided to use Telemetry from the Chromium repository to run similar tests over the various versions of Bootstrap.

Bootstrap Versions

Unlike Topcoat, Bootstrap has a much longer history, and collecting historical data over every commit would be hard. Instead, I decided to pick the commits that correspond to tagged releases and enumerate those. Though the evolution of the build process shows the framework maturing, it was hard to automate builds that variously relied on Make, older versions of some npm components, and finally Grunt. I simply generated the Bootstrap versions manually and checked them into the bootstrap-perf repository, since they would not change anyway.

Generating the test files

The next step is generating the test files. Like Topcoat, I wanted to measure the evolution of each component separately. Tests were written for most of the components listed in the examples page, and the individual test pages for specific versions of Bootstrap are programmatically generated. Check out the Gruntfile in the repository to see how this is done. These test files are also copied over to the Chromium source tree, where the Telemetry suites are started.
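To give a feel for the generation step, here is a minimal sketch of what "programmatically generated test pages" could look like: for each Bootstrap version and each component, stamp out a small HTML page that exercises that component's CSS. The version numbers, component markup, and file naming here are illustrative assumptions, not the actual contents of the repository's Gruntfile.

```javascript
// Illustrative sketch only: versions, components, and markup are made up.
const versions = ['2.0.4', '2.3.2', '3.0.0'];
const components = {
  button: '<a class="btn" href="#">Button</a>',
  label: '<span class="label">Label</span>'
};

function testPage(version, markup) {
  // Repeat the markup so the renderer has enough work to produce a
  // measurable rendering time.
  return [
    '<!DOCTYPE html><html><head>',
    `<link rel="stylesheet" href="css/bootstrap-${version}.css">`,
    '</head><body>',
    markup.repeat(100),
    '</body></html>'
  ].join('\n');
}

const pages = [];
for (const version of versions) {
  for (const [name, markup] of Object.entries(components)) {
    pages.push({ file: `${name}-${version}.html`, html: testPage(version, markup) });
  }
}
console.log(pages.length); // 3 versions x 2 components = 6 pages
```

Each generated page pins exactly one stylesheet version, so a measured difference between two pages can only come from the CSS, not the markup.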

Collecting the data

Once the files are copied into the Chromium source tree, the tests are run. The Telemetry page_set JSONs can run the tests for all the components or for an individual component. Once the tests have run, the results are available as CSV and can be uploaded to the CouchDB server using the web page in the gh-pages branch, or online. The tests were run multiple times, and the raw data is available here.
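For readers unfamiliar with Telemetry, a page_set of that era was roughly a JSON file listing the pages to load. The sketch below shows the general shape; the paths and description are made up, and the actual page_set files in the repository may use additional fields.

```json
{
  "description": "Bootstrap component test pages (illustrative sketch)",
  "pages": [
    { "url": "file:///bootstrap-perf/button-3.0.0.html" },
    { "url": "file:///bootstrap-perf/label-3.0.0.html" }
  ]
}
```

Pointing a page_set at a single component's file is what makes the per-component runs possible.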

Analyzing the data

This CouchDB view simply returns the stats for each component across the different versions. The data is drawn on the web page using jqplot. Also note that while I am storing the data on iriscouch, I have replicated it on cloudant to ensure the database does not die under traffic.
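As a rough illustration of what such a view does, here is a hypothetical CouchDB-style map function keyed on component and version, together with a plain-JS stand-in for the kind of stats a reduce step produces. The document fields (`component`, `version`, `time`) are assumptions; the real view in the repository may differ.

```javascript
// Hypothetical map function: CouchDB supplies the global emit() when it
// runs this server-side, so it is only defined here, never called directly.
const mapFn = function (doc) {
  emit([doc.component, doc.version], doc.time);
};

// A plain-JS equivalent of a stats-style reduce over the emitted values,
// for illustration: sum, count, min, and max per [component, version] key.
function statsReduce(values) {
  const stats = { sum: 0, count: values.length, min: Infinity, max: -Infinity };
  for (const v of values) {
    stats.sum += v;
    stats.min = Math.min(stats.min, v);
    stats.max = Math.max(stats.max, v);
  }
  return stats;
}

console.log(statsReduce([12, 15, 11])); // e.g. mean = sum / count
```

Grouping by [component, version] is what lets the chart page ask for one component's history in a single query.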

Conclusion

Two years may seem like a long time on the web's timescale, but with the tools available today, creating jank-free sites is getting easier. I am also working on a version that could use xperf to collect similar data for IE, for both Topcoat and Bootstrap. Side note: this is an independent analysis that I did over a weekend, not an authoritative study. Still, it would be fun to see such a performance suite become a part of the official Bootstrap continuous integration process.

I was at Battlehack, a hackathon organized by PayPal on August 10 and 11, 2013. I teamed up with Hakon Verespej and hacked on the AR Drone to build an interesting project.

The Pitch

The hackathon was themed around "local" and "neighborhood", and the first problem we thought of was finding parking. We thought it would be fun to have the AR Drone fly around, find empty parking spots, and hold one for you until you get there.
The AR Drone is programmable and would be launched from a phone app. The drone would fly to the area, identify empty spots, and report back the location.

The Execution

The AR Drone is programmable, and we decided to use the node-ar-drone module to control it. The phone app is a pure HTML app that sends a message to a Node server to launch the drone. The Node server starts the drone and moves it around.
The phone app constantly polls the server for the latest status and also allows the drone to be called back.
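To sketch the server side: the flight logic can be written against the node-ar-drone client's movement methods (takeoff/up/front/stop/land are part of that module's API). The specific speeds and the flight sequence below are invented for illustration, and a mock client stands in for the real one so the plan can be exercised without hardware; in the real server this would be `require('ar-drone').createClient()` behind an HTTP route the phone app polls.

```javascript
// Illustrative flight sequence; the real hackathon code likely differed.
function flightPlan(client) {
  client.takeoff();
  client.up(0.5);    // climb to a scanning height
  client.front(0.3); // cruise forward over the parking row
  client.stop();     // hover while camera frames are analyzed
  client.land();
}

// Mock client that just records calls, standing in for the node-ar-drone
// client so the plan is testable anywhere.
function mockClient(log) {
  return {
    takeoff: () => log.push('takeoff'),
    up: (s) => log.push(`up:${s}`),
    front: (s) => log.push(`front:${s}`),
    stop: () => log.push('stop'),
    land: () => log.push('land')
  };
}

const log = [];
flightPlan(mockClient(log));
console.log(log.join(' -> ')); // takeoff -> up:0.5 -> front:0.3 -> stop -> land
```

Keeping the plan as a function of a client object is also what made manual control easy: the test page could drive the same methods one at a time.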

On the server, OpenCV is used to grab the camera images and analyze them to detect the "emptiness" of a parking spot. In the interest of time, we just look for Canny edges to identify empty spots. For the demo, the drone flies lower when it detects the presence of an object beneath it. The source code for everything we managed in those 20 hours is available on GitHub.
We also had a small test page that let us control the drone manually and showed us the results of running the OpenCV filters.
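One plausible reading of the Canny heuristic, sketched below under assumptions that may not match the actual hackathon code: an empty spot shows mostly its crisp painted lane markings, while a parked car produces a much denser edge map. The thresholds and the three-way classification here are invented for illustration.

```javascript
// Fraction of pixels in the spot's image region that the Canny pass
// marked as edges.
function edgeDensity(edgePixels, totalPixels) {
  return edgePixels / totalPixels;
}

// Thresholds below are illustrative guesses, not measured values.
function classifySpot(edgePixels, totalPixels, low = 0.02, high = 0.15) {
  const d = edgeDensity(edgePixels, totalPixels);
  if (d < low) return 'unclear';    // almost no edges: blur, shadow, bad frame
  if (d > high) return 'occupied';  // dense edges: likely a car in the spot
  return 'empty';                   // moderate density: lane markings only
}

console.log(classifySpot(4000, 76800)); // density ~0.052 -> "empty"
```

A thresholded density check like this is cheap enough to run per frame, which matters when the analysis has to keep up with a live drone camera feed.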

Problems

The AR Drone 2.0 with GPS had not been released at the time of Battlehack, so implementing the "search for a parking spot" part was not possible without the ability to send the drone to a specific location.
Getting the list of parking spots was also hard; we could not find a database with such a geo-tagged map.
The biggest problem, however, was the stability of the drone itself. Since we were demonstrating indoors, it was hard to ensure that the drone would hover in one place and not drift. The drone could not be controlled as precisely as we wanted.

The Presentation

After almost 20 hours of non-stop coding, here is what we ended up with.

On the whole, it was a fun event, and I was able to work on something interesting. Hakon was a great teammate, and I was amazed by his dedication to getting it all working; I would love to team up with him at another hackathon.