As a Windows developer who uses git and gcc, I found it easiest to install MinGW to help work in a console (Git Bash here is a fantastic shell extension!). Unfortunately, it’s been a while since I installed it, and I forget which version I’m using. After a bit of googling, it turns out someone (stahta01) worked out years ago how to determine this from a shell script.
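I don't have stahta01's exact script handy, but the general shape of the trick, as a sketch, is to pull the version number out of the toolchain's own output:

```shell
# A sketch, not stahta01's actual script: extract a version number from
# compiler output. On a real MinGW shell you'd pipe `gcc --version` in;
# here a sample line stands in so the extraction itself is visible.
sample_output="gcc.exe (MinGW.org GCC-6.3.0-1) 6.3.0"
version=$(printf '%s\n' "$sample_output" | sed -n 's/.* \([0-9][0-9.]*\)$/\1/p')
echo "$version"
```

On an actual MinGW install, `gcc --version | head -n 1` supplies that line, and the distribution name in parentheses usually identifies the MinGW build as well.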

It looks like a few people have been hitting my blog trying to find information on integrating Popcorn.js and Big Blue Button. I thought I’d take the opportunity to give a nod of the hat to a colleague, dseif, for his recent contribution towards making this possible at Hackanooga.

After a hiatus from the internet that seemed far longer than the month it actually was, I’m back online.

I’m looking to continue my work on popcorn.js’s parser support, specifically with cleanup and adding styling support. After some refactoring and code preparation for what is to come, I’m ready and read up enough to begin. Of the three parsers in popcorn.js that support in-spec styles, I’ve decided to focus on the TTML parser over the other leading candidate for a first convert, SSA/ASS.

As with all the parser styling support, this will be a task of mapping the spec styles to CSS. While the TTML spec is significantly (read: an order of magnitude) larger than the SSA/ASS spec, it should be easier because its style names and behaviours are so similar to what I have to work with in the browser (JavaScript and CSS). In fact, near the beginning of the TTML spec’s Styling section, the W3C advises:

In particular since [CSS2] is a subset of this model, a CSS processor may be used for the features that the models have in common.

That’s perfect! TTML is an XML-based format, so parsing is already made easier by JavaScript’s XML and DOM facilities. This means that after extracting what I need, I can minimize the work involved in mapping and validating style names and attributes, instead passing them through to the browser for validation and processing. It’s not all easy, however. Some extra rules will need to be validated, such as style inheritance from other styles, invalid/inaccessible inheritance, and ensuring styles are applied to the appropriate elements.
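As a concrete illustration of how cheap that mapping can be (the function below is my own sketch, not Popcorn's actual code), most `tts:*` style attributes differ from their CSS counterparts only in prefix and casing:

```javascript
// Sketch: translate a TTML styling attribute name to its CSS property name,
// e.g. "tts:fontSize" -> "font-size", "tts:backgroundColor" -> "background-color"
function ttsToCss(name) {
  return name
    .replace(/^tts:/, "")                   // drop the TTML namespace prefix
    .replace(/([A-Z])/g, function (m, c) {  // camelCase -> hyphen-case
      return "-" + c.toLowerCase();
    });
}
```

The browser can then validate the resulting property/value pair itself, for example by assigning it to an element's style and seeing what sticks.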

I’ve already started by parsing some basic region and style data (TTML’s equivalent to CSS classes), and structuring some unit tests. What remains at this point is extracting inline styles and applying all styles to the displayed text. And, of course, validation rules, further unit tests, demos, and tackling any as-yet-unforeseen issues. It’s already looking to be a fun, wild project.

I often work by committing small, incrementally stable portions of my work to my working branch, then pushing the entire thing when it’s done. While great for working, this makes for a very cluttered commit history. Not only that, but it also makes it harder for peer reviewers to see what changes you’re trying to contribute. To address this, I took the lazy (yet ironically more laborious) step of making a copy of my files, creating a clean working branch, and applying my accumulated changes in one copy-paste operation. In other words, I was manually squashing my many commits into one.

It turns out git has this capability built in. I knew of it, but had never bothered to look into it. It’s simple, and could’ve saved me a lot of time. This blog post outlines the process:

git rebase -i HEAD~4

That command begins interactively picking and squashing the last four commits. In the editor that opens, you change the commit list from picking all four commits separately to squashing the three later ones into the first.
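For illustration, the todo list git opens looks roughly like this (hashes and messages here are made up), before and after the edit:

```
# Before (oldest commit first):
pick a1b2c3d Start TTML style parsing
pick e4f5a6b Fix region lookup
pick c7d8e9f Add unit tests
pick 0a1b2c3 Cleanup

# After editing, the three later commits get squashed into the first:
pick a1b2c3d Start TTML style parsing
squash e4f5a6b Fix region lookup
squash c7d8e9f Add unit tests
squash 0a1b2c3 Cleanup
```

On save and close, git combines the four commits and opens the editor once more for a single combined commit message.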

It’s been a while since I’ve done some more serious JavaScript development, and even then it was typically very procedural. With adding version-dependent features for Popcorn’s SSA/ASS subtitle parser, I figured this would be a good opportunity to learn. I remember from the past that JavaScript uses prototypal inheritance, something I have no experience with in other languages. Luckily, I didn’t have to go far to find some helpful resources.

Inheritance and working off the prototype are what I’ve struggled with most, so I’m including some code snippets below. Most are inspired or outright copied and adapted from the MDN article. It’s also on pastebin.

// Define a constructor for an 'SSAParser' object
function SSAParser(text) {
    this.text = text;
    this.lines = [];
}

// Define methods for our parser object. So that memory is only allocated for
// them once across many instances, we put them on the prototype.
// Inside these methods, 'this' refers to the instance object.
SSAParser.prototype = {
    parse: function() {
        return "SSA parser: " + this.text;
    },
    getName: function() {
        return "SSA parser";
    }
};

// Instantiate with 'new' so that the properties are put on a new object
// (called 'this' in the constructor)
var parser = new SSAParser("1");
var parsed = parser.parse();

// Forgetting 'new' makes it a normal function call: 'this' points at the
// global object (so 'text' and 'lines' leak onto it), and since SSAParser
// doesn't return a value, 'parser' ends up undefined.
var parser = SSAParser("1");
try {
    var parsed = parser.parse(); // TypeError: parser is undefined
} catch (e) { alert("Please use 'new'"); }

// Being clever, we can avoid 'new' by calling the functions directly and
// explicitly specifying 'this' to be our parser object. This results in far
// more typing.
var parser = {};
SSAParser.call(parser, "1");
var parsed = SSAParser.prototype.parse.call(parser);

// Everything on the prototype is publicly accessible.
// Using closures, we can adapt the above method to have hidden helper
// functions that aren't publicly accessible.
SSAParser.prototype.useHelper = (function() {
    // This function is defined once, but is not directly on the prototype.
    // It is not publicly visible, but as it's defined in the same closure as
    // a function on the prototype, it is accessible there.
    // Called plainly, its 'this' would be the global object, so we pass the
    // instance in explicitly with call().
    function getHelperText() {
        return "From hidden helper: " + this.text;
    }
    // This function ends up on the prototype; 'this' works as usual within it
    return function() {
        var text = "From helper: " + this.text;
        // Call getHelperText with our instance as 'this';
        // it becomes a private member function
        text += getHelperText.call(this);
        return text;
    };
})();

// Inheritance: ASSParser extends SSAParser
function ASSParser(text) {
    // Explicitly call the parent constructor
    SSAParser.call(this, text);
}
// Base the prototype on an instance of the parent class, and fix up the
// constructor reference to point back at the child
ASSParser.prototype = new SSAParser();
ASSParser.prototype.constructor = ASSParser;

// Override the parent definition of getName.
// Unless we store the old definition, we can no longer call the parent version.
ASSParser.prototype.getName = function() {
    return "ASS parser";
};

// Define a new function that doesn't exist on the parent
ASSParser.prototype.getPi = function() {
    return Math.PI;
};

var parser = new ASSParser("sample");
alert(parser.getName());
alert(parser.getPi());
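As a follow-up sketch (my own variation, not code from the MDN article): basing the child prototype on `Object.create` avoids running the parent constructor with no arguments just to build the prototype chain:

```javascript
// Variation on the inheritance setup above: use Object.create so the
// parent constructor isn't invoked just to create the prototype object.
function SSAParser(text) {
  this.text = text;
  this.lines = [];
}
SSAParser.prototype.getName = function () {
  return "SSA parser";
};

function ASSParser(text) {
  SSAParser.call(this, text); // still explicitly run the parent constructor
}
ASSParser.prototype = Object.create(SSAParser.prototype);
ASSParser.prototype.constructor = ASSParser;
ASSParser.prototype.getName = function () {
  return "ASS parser";
};

var p = new ASSParser("sample");
// p.getName() returns "ASS parser", and p is still an instanceof SSAParser
```

This sidesteps the oddity of constructing a throwaway `SSAParser` with an undefined `text` just to populate the chain.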

While other commitments have kept me from contributing much code to Popcorn.js in recent months, I’ve had plenty of time to think about how to tackle a group of related tickets assigned to me. Approximately a year ago, for Popcorn versions 0.3 and 0.4, I was responsible for incorporating some earlier subtitle parsing into Popcorn. That grew into Popcorn supporting text display for 7 standardized subtitle formats. 5 of these formats also include their own in-source formatting, each with its own syntax. They fall into two main classifications:

XML-based:

- TTXT Format
- TTML Format

Text-based:

- WebVTT
- Sub-Station Alpha (SSA)
- Advanced Sub-Station (ASS)

With five different ways to represent the same information, I plan to avoid duplication as much as possible. When done properly, this not only makes it easier to initially develop, but makes future maintenance and improvements simpler. A real example of why this is useful is that in May/June 2011 I had a nearly-complete initial version of a WebVTT parser, with just a few CSS-related quirks to work out. After having put it down for a bit, the standard evolved such that much of what I had done was obsolete. Since it wasn’t modular, most of it has been discarded into a file on my desktop.

Looking forward, I see two independent cases for code maintenance: format evolution and CSS evolution. Similar domain problems have been solved by translating the source (raw subtitles) into a universal intermediary language, or a machine interpretation, which then gets processed and output. Human language translation has been approached like this, as have cross-platform programming languages. Both Java and .NET compile to an intermediary language (called bytecode and MSIL, respectively), which is then translated to the desired, platform-specific output at runtime. While there is a very small overhead to the second translation, a Microsoft engineer has said that maintainability increases drastically.

Strengthening the argument for this approach is that certain common display functionality (CSS class lookups, creation, etc.) will be required by multiple parsers, and changes must be reflected in all parsers. It is with this knowledge that I plan to stand on the shoulders of giants.
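To make the intermediary-representation idea concrete, here's a sketch of what it could look like (all names are hypothetical, not Popcorn's actual API): each format-specific parser only translates its own syntax into plain cue objects, and a single shared renderer consumes them.

```javascript
// Hypothetical intermediary representation: a plain "cue" object that any
// format parser (TTML, SSA/ASS, WebVTT, ...) could emit.
function makeCue(start, end, text, styles) {
  return { start: start, end: end, text: text, styles: styles || {} };
}

// A toy format-specific parser: it only has to map its own syntax onto cues.
// Input lines look like "0 --> 5 | Hello" (purely illustrative).
function parseToyFormat(raw) {
  return raw.split("\n").map(function (line) {
    var parts = line.split("|");
    var times = parts[0].split("-->");
    return makeCue(parseFloat(times[0]), parseFloat(times[1]), parts[1].trim());
  });
}
```

With this split, a change to CSS handling touches only the shared renderer, and a change to one subtitle spec touches only that format's translator.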

A while ago I researched some of the differences in modifying the structure of an HTML document using innerHTML or the DOM. The summary at that time was that, while using innerHTML means less code, most modern browsers show negligible differences in execution time. That is, of course, provided you minimize modifications to the active document (web page) directly, as each can potentially cause a redraw operation. The bottom line was that it came down to personal preference, with a tradeoff between less code and more standards-adherence and reliability.

There is an epilogue to this story:

My personal preference is to use standards for cross-browser reliability. Unless file size is a concern (low-bandwidth networks) or large-scale modifications are needed (re-creating large portions of a document programmatically), working with the DOM directly is easier in the long run. It seems the output of innerHTML can vary across browsers, depending on the node structure underneath. This can make comparing DOM structure by innerHTML (in unit tests, for example) difficult to get right cross-browser. Whereas most browsers will concatenate nodes together without a space (<div><span></span></div>), IE seems in some/most instances to use spaces between nodes on output: (<div> <span> </span> </div>).
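One way (a sketch of my own, not something from a library) to keep such comparisons honest in unit tests is to normalize the markup before comparing:

```javascript
// Sketch: normalize innerHTML output so IE's extra inter-node spaces and
// uppercase tag names don't break string comparisons in tests.
function normalizeMarkup(html) {
  return html
    .replace(/>\s+</g, "><")  // drop whitespace between adjacent tags
    .replace(/\s+/g, " ")     // collapse remaining whitespace runs
    .toLowerCase();           // old IE also uppercases tag names
}

normalizeMarkup("<div> <span> </span> </div>") ===
  normalizeMarkup("<div><span></span></div>"); // true
```

It's lossy by design (whitespace-sensitive content would need gentler handling), but for structural assertions it papers over exactly the quirk described above.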

While this may seem an edge case, it’s one more argument for being mindful and sticking to DOM-standard tools and APIs.

I’ve been hearing a bit of buzz about IE 10 increasing its support for HTML5 and CSS3, so I thought I’d take Popcorn.js for a whirl on it. VirtualBox made the install process pretty painless; I only had to think for myself once. Since IE 10 is built upon Windows 8, I had to download the developer preview. There were quite a few tutorials on the web outlining the install process (some decrying that certain VMs didn’t work at the time), but VirtualBox seemed the most tried and true. In fact, between the time those articles were written and when I tried this, newer VirtualBox versions had been released with a special “Windows 8” configuration setup. I first followed this PCWorld one, but eventually felt comfortable winging it.

The only hang-up I had was with hardware-level virtualization. I would consistently receive VM errors on startup, and ignoring them prompted me with “Windows install error, please reboot” and HAL_INITIALIZATION_FAILED dialogs. Apparently my processor supported hardware virtualization, and VirtualBox/Windows was trying to use it, but it was disabled in the BIOS. Changing that one setting allowed for a clean and easy install.

Things are quite slow on both my host and guest OSes right now, but I guess that can be expected with only 3 GB to run both, plus VirtualBox, Apache, Firefox, and IE. Popcorn itself performed quite well, especially considering the pre-beta nature of Win8 and IE10 and the aforementioned memory constraints.

I was able to test with minimal effort, as my host OS has Apache set up on it, and Win8 is configured to connect to the network through it (the host OS) using NAT. All I had to do on Win8 was enter the IP address of my host OS, followed by the port I run Apache on, and I could run everything I needed remotely.

First off, jQuery UI is great. It makes creating a responsive UI simple, and the tabs capability is great. However, I recently came across a use case where I wanted to dynamically alter the URL of a tab (specifically, the query string), something it seemed the library didn’t allow. The load-time href of the tab was wrapped and inaccessible at run-time. The fix would be simple, but the source file I was working with was the minified output of their customizable web-based build system, making modification impossible. Downloading fresh, unminified source was equally impractical. So I looked into building it myself. It turned out to be surprisingly easy.

Obtaining the source

First, I downloaded the latest stable source (1.8.16 at this time) and took a look through it. All the modules had their individual files. In the build directory, I noticed “build.xml”. This is an Ant build configuration file, so I then proceeded to set up Ant.

Setting up Ant

It took reading some documentation, but this was equally simple. I already had Java, so I jumped right to getting the application.

Building the Source

There were a few other build options I could’ve chosen. “Minify” will just output the minified source, without license info, documentation, or zipping it.
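For reference, the invocation itself is just Ant plus a target name, run from the build directory (a sketch from memory; check the targets actually defined in build.xml):

```
cd jquery-ui-1.8.16/build
ant          # default target: the full release build
ant minify   # just the minified source, no license info, docs, or zip
```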

I also experimented with building jQuery UI 1.9 beta, but ran into some issues. Getting the source from GitHub was simple, but they’ve replaced their minification engine, moving from the Google Closure Compiler to uglify.js, since it “saves 4 minutes per build, and actually produces slightly smaller files“. This posed an issue, since the current version of the code executes uglify.js from a shell script. I converted the Linux shell script to a Windows batch file, with a little help to get the directory name. This all went fine until it came to actually executing the JS from the command line. There is no console JS engine by default on Windows, and the js.exe produced when I built Firefox errored when it ran the script (an issue with referencing “global”). Since using 1.8.16 was alright for my immediate purposes, I didn’t look into further solutions.

I could’ve tried installing Rhino to get around this, but it’s something to try in the future. Another solution is obviously to try running it on Linux. I spent a bit of time updating my Ubuntu VM install after getting everything working. Until I try and venture into building 1.9, I’m just fine developing off of 1.8.16. After all, it works great.

More often than not, solving problems through software is simple. The issue/requirements are there in front of you, clear and needing to be solved. Every once in a while though, something stands up and teaches me a lesson.

Every system relies on external components. Whether it’s a third-party library to help with a specific purpose (jQuery, ComponentOne), a runtime to compile applications against (.NET, Java), or a compiler library for simple operations (iostream, math.h), all systems have dependencies. They’re everywhere.

99 times out of 100, when something goes wrong with the code, it’s my fault. These are the cases I mentioned earlier: clear issues, simple solutions. It’s the tricky ones that exist outside my program’s black box where creative coding and problem solving come into the picture. These are the ones I like. They teach me to think outside the box, to not take anything for granted, and that, above all, we’re all human.