If you are in the process of learning Chinese, or if you would like to see an example of a keyboard-accessible HTML UI, then you might find it interesting.

Browsable Chinese characters

One key to browsability is combining the usual arrangement by radicals and strokes with the absence of any page navigation. I think the ability to look up Chinese characters as you would in a dictionary is a valuable skill, and one that improves the more you use it. When I started learning the Chinese language, computer support was barely getting off the ground. If I wanted to look up a character whose pronunciation I didn’t know, I had to look it up by radical and strokes.

Nowadays, if you can write a character, however badly, you have more options:

On Windows, you can use the built-in Tablet PC Input Panel, which requires only that you have a tablet input device.

Still, I think you learn a lot and get a lot of satisfaction from looking up a character by radical and strokes. A side benefit of the Chinese Character Browser is that when you see 6,763 characters broken down by radical and strokes, it doesn’t look like so many. It gives your mind a sense of the entirety of what’s before you, if you are setting out to learn as much as you can.

Keyboard accessibility

Another key to browsability is being able to use the keyboard to do everything. There was a time, before the Web, when keyboard support was central to the creation of almost any UI. I think it’s clear now that keyboard support in HTML applications has fallen permanently by the wayside. Even so, I wanted to see what it would be like creating an HTML UI that was fully keyboard-accessible.

One important note here is that even though I created a keyboard-accessible UI, it’s not accessible in the Section 508 sense—at least, I doubt it. Unfortunately, Section 508 compliance is generally equated with working well with specific screen readers. My last look at screen readers a few years ago revealed an almost complete lack of support for dynamic HTML applications. My only goal here was keyboard accessibility, not compliance with a specific screen reader.

Here’s how the keyboard works in the Chinese Character Browser:

Focus is indicated with a focus rectangle, so you can find/follow the focus with your eyes.

Tabbing moves the focus from left-to-right and top-to-bottom. Shift+tabbing moves in the opposite direction.

When a list has focus, selection is visually indicated and can be changed with the up/down arrows, page-up/down, home/end, and ctrl+home/end. (Home/end operate on the visible items. Ctrl+home/end move to the beginning/end of the list.)

So far, that’s just standard keyboard stuff. I came up with a few “extras” to fit the tool:

You can use the left/right arrows to move between lists (tab and shift+tab also work).

Ctrl+up/down jumps to next higher/lower stroke count (or additional strokes, depending on which list is focused).

Pressing a number key jumps to that stroke count (or additional strokes, depending on which list is focused). To go higher than nine, use shift+#. You can’t go higher than 19 using this method.

There are various ctrl+shift sequences that can be used like keyboard accelerators. These are labeled in the UI:

Ctrl+shift+c: toggle between GB2312 and Big5. See the API doc for more information.

Ctrl+shift+r: toggle between using kRSKangXi and kRSUnicode for radical/stroke information. See the API doc for more information.

Ctrl+shift+f: cycle through the font list.

Ctrl+shift+s: toggle the sort order of the main radical list.
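The number-key jump described above amounts to a small mapping from a digit key (with shift adding ten) to a stroke count. A minimal sketch of that mapping; the function name and the shift behavior for reaching 10–19 are my assumptions, not the tool’s actual code:

```javascript
// Map a digit key to a stroke count. A plain digit 1-9 jumps to that
// count; shift+digit adds ten (an assumption about how counts of
// 10-19 are reached, matching the "can't go higher than 19" rule).
function strokeCountForKey(digit, shiftPressed) {
  if (!Number.isInteger(digit) || digit < 0 || digit > 9) {
    return null; // not a digit key
  }
  return shiftPressed ? digit + 10 : digit;
}
```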

Limitations of character-based study

It’s worth acknowledging that studying characters is only part of the whole picture. You will not learn Chinese simply by studying individual characters.

Many characters have different pronunciations depending on how they are used.

This tool simply lists out the different possible pronunciations.

Many characters are pronounced with a neutral tone when they appear at the end of a multi-character term.

It’s challenging enough to remember the pronunciation and tone of each character. On top of that, you need to remember whether a given term ends in a neutral tone. When you first learn a character in its neutral-tone form, you haven’t yet actually learned the character; you can’t yet use it correctly in contexts where its tone does matter.

Third-tone characters are often correctly pronounced using a second tone.

If you learn by listening to third-tone characters that are spoken with a second tone (tone sandhi), you haven’t learned the correct underlying tone of the character. You will, in fact, learn the wrong tone.

I’d venture to say that every first-day student of Mandarin is bombarded with conflicting and incorrect information about what’s likely the very first character they learn: 你. The teacher says unambiguously that it’s pronounced with the third tone, then goes on to pronounce it (correctly) with a second tone in 你好, without ever mentioning why such a blatant contradiction is occurring.

Rating things is all the rage. I suspect that rating scales can influence the ratings given, and that other factors influence the ratings a person is comfortable giving publicly. Except for the “thumbs up” idea (only occasionally paired with a “thumbs down”), usage of a five-tier rating system is nearly universal, but with no universal definitions of the different ratings. And I’m not sure anyone pays close attention to the definitions anyway.

If you read reviews on (e.g.) Amazon and Yelp, you see ratings given purely as a number of stars, with no definition of the ratings. Only if you post a review are you shown a definition. The same applies to Angie’s List, except that letter grades are used instead of stars.

Sites seem not to want to mess with readers’ perceptions of what it means to be rated one star or five stars; they assume that readers just “get it.” Yet the same sites do want to help reviewers pick a number of stars, as if reviewers don’t just “get it.” That asymmetry is odd.

Amazon

Here are the rating definitions on Amazon, seen only by reviewers (but accessible to anyone who tries):

5 stars

I love it

4 stars

I like it

3 stars

It’s OK

2 stars

I don’t like it

1 star

I hate it

With Amazon’s scale, you give stars even when you hate something (one star) or don’t like something (two stars). This is the nature of using a star-based system that covers the love-hate spectrum. How many reviews have you read where the reviewer said, “I’d give zero stars if I could”? Reviewers don’t want to give even one star to something they hated, even though that is what the definition says to do. Perhaps some raters don’t know what their rating is defined to mean, or perhaps they are more in tune with readers, who likely neither know nor care how the ratings are defined.

What I like about this scale is its symmetry. If you hate something as much as you could love it, you give it one star. If you dislike something as much as you could like it, you give it two stars.

Angie’s List

Here are the rating definitions on Angie’s List, seen only by reviewers (but accessible to any member who tries):

A

Excellent

B

Good

C

Fair

D

Bad

F

Lousy

The A-B-C-D-F system feels the most meaningful to me. Perhaps it’s my experience of sixteen years of school in the U.S., but rating something A-B-C-D-F feels more meaningful than rating something one-to-five stars. An A is coveted. A B is still good, but no one wants to get one. A C is really not good and represents failure to many people. D means unacceptably bad but not a complete failure, whereas F means a complete failure. Unfortunately, this grading system is not internationally universal.

Yelp

Here are the rating definitions on Yelp, seen only by reviewers (but accessible to anyone who tries):

5 stars

Woohoo! As good as it gets!

4 stars

Yay! I’m a fan.

3 stars

A-OK.

2 stars

Meh. I’ve experienced better.

1 star

Eek! Methinks not.

I really dislike these definitions. The difference between four and five stars is “Yay!” vs. “Woohoo!” This just doesn’t connect with me.

“A-OK” reads as better than okay. For a restaurant I found perfectly enjoyable, an A-OK rating sounds perfectly fair and logical. But if three stars instead means “just OK” or a C grade, it turns into a rather insulting rating.

2 stars: This is the “whatever” rating, with only a twinge of negativity. Does this map to “I don’t like it” or a D rating? Not in the slightest.

1 star: This is the only fully negative rating, but it still doesn’t feel as strong as “I hate it” or an F grade.

Yelp, cont.

Many Yelp users register with their real names and pictures, and I think not being anonymous inhibits giving an honest opinion in some cases. This doesn’t apply to restaurants. Many restaurants in my area get hundreds of reviews, so anonymity comes from no one caring who wrote a specific review. Most if not all restaurants that are not terrible (i.e., staying in business) end up with 3.5 stars. My conclusion is that any restaurant that stays in business is liked by enough people to give it a decent rating. Thus, the rating summary for restaurants actually provides essentially no useful information. Every restaurant I’ve looked up in recent memory had about 3.5 stars, regardless of how good it actually was (in my snobbish opinion)—not that a C+ is a very good grade.

You can review anything/anyone on Yelp, and lack of anonymity comes into play for certain categories of reviews. For example, you can rate physicians. It seems that most one-star ratings for physicians are based on a single bad experience. How many Yelp users will publish a five-star rating of their long-term physician? I’d venture to say very few, because most reviewers are reviewing one meal in a restaurant, not the years they’ve been seeing a personal physician. For that matter, who would give their personal physician fewer than five stars and still want to face them at their next visit? Thus, physicians tend to have either no reviews or mostly negative reviews along the lines of, “Stay away!”

The case of the service provider (e.g., someone with a contractor’s license) can be interesting, and I am guilty of this: either I write a five-star review, or I don’t write a review. There’s often no gradation in the reviews. I don’t want to be the first to post less than a five-star review. If I feel the service provider was perhaps unethical or crooked, then I might write a negative review. But the service provider might pester you in return. I’m not just saying this; it happened to me the first week I posted on Yelp.

There are many classics of human nature wrapped up in this. If you had a one-on-one relationship with someone, perhaps starting with an estimate, followed by days or even weeks of working together, and you were unhappy with certain things along the way, society teaches you to always be polite and perhaps let your unhappiness fester under your skin. Now, you have the opportunity to write a negatively tinged review. Will you? Unlikely. If your Yelp persona is in fact yourself, then you’ll still feel the need to maintain being polite. Let’s face it: being polite means, for the most part, being dishonest.

But heaping praise on others, especially publicly, is something strongly encouraged by society.

With contractors, you will often see almost entirely five-star ratings. The value of the ratings, when they are all good, is really about the quantity of them. Knowing that someone was happy is valuable information. A single five-star rating (as the only rating) is not that valuable. Ten five-star ratings let you know at least ten people were very happy, and that’s good. There were probably some who weren’t entirely happy, but that’s okay. There’s always the risk that things won’t work out perfectly. What I hope for in the reviews (and what I try to give) is plenty of detail.

I think it boils down to this: For people you interacted with just once and had a bad experience with, you are more willing to give them a bad review. For people you interacted with multiple times and had a less than stellar experience with, you will not want to rake them over the coals. Giving praise is easy, but giving criticism is hard, especially when you’re not anonymous.

Some time ago, before this HTML version, I wrote the polyrhythm visualizer as a Java applet while learning a number of the Chopin nocturnes. It was spurred on in particular by Op. 27, No. 2 in D-flat major. It’s in 6/8 time (i.e., two beats per measure), with the left hand playing six sixteenths per beat throughout. The ending involves two beats of seven notes in the right hand against six in the left (a ratio of 7/6). Not only was the 7/6 a big challenge, but I started noticing patterns that called for some exploration.

Visualizations

There’s a pattern formed by which notes “fire” closest to each other in each hand.

14 against 5

Lines are drawn between top and bottom to emphasize when notes in the left and right hands are closest. The ebb and flow of the time distance itself has a pattern. In the example above, the left hand increasingly trails the right hand, until the midpoint, and then the left hand decreasingly trails the right hand, until they resynchronize.

One key for handling close ratios is that the midpoint involves an even trade-off between hands.

7 against 6

I didn’t want to draw too many lines between top and bottom, so the closeness visualization trumps the equidistant visualization.

5 against 6
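The ebb and flow of the closest-note distances described above can be computed directly. A sketch, with distances measured in 1/(m·n) of a beat so the arithmetic stays in integers (the function name is mine):

```javascript
// For m notes against n per beat, right-hand note i fires at i/m and
// left-hand note j at j/n. In units of 1/(m*n) of a beat those are
// n*i and m*j. For each right-hand note, find the distance to the
// nearest left-hand note (including the downbeat of the next cycle).
function closestGaps(m, n) {
  const gaps = [];
  for (let i = 0; i < m; i++) {
    let best = Infinity;
    for (let j = 0; j <= n; j++) { // j === n is the next downbeat
      best = Math.min(best, Math.abs(n * i - m * j));
    }
    gaps.push(best);
  }
  return gaps;
}

// For 7 against 6 the gaps grow to the midpoint and then shrink,
// matching the ebb and flow described above:
// closestGaps(7, 6) -> [0, 1, 2, 3, 3, 2, 1]
```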

Auralizations

After I rewrote my Java applet using HTML canvas, it seemed I should be able to bring it to life with HTML audio. Because of the precise timing required to render the audio, I wasn’t very optimistic about this in JavaScript, and it’s by no means perfect, but it was successful enough to go public.

Update: I’ve tried this on a few different systems now, and it functions horribly and unacceptably except on my development system. Google Chrome and IE9 RC work very accurately on my development system, I swear it. For now, the auralization is best described as experimental.

Timing in JavaScript, Part #1

To keep things simple and somewhat accurate, I wanted to use window.setInterval(...) rather than try to have sequences of window.setTimeout(...) daisy-chained together. I didn’t know what to expect across browsers. My conclusion is that timers in all browsers are very accurate, with Chrome being the most accurate. Chrome timers are least affected by CPU activity within the browser itself and other processes.
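Whichever timer is used, the two hands can be merged into one event schedule up front and driven from a single interval at the finest subdivision. A sketch under that assumption (the function name and event shape are mine):

```javascript
// Merge the onsets of m right-hand notes and n left-hand notes per
// beat into one sorted event list, with times in 1/lcm(m, n) of a
// beat, so a single setInterval can fire both hands.
function mergedSchedule(m, n) {
  const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
  const units = (m * n) / gcd(m, n); // lcm(m, n) subdivisions per beat
  const events = new Map();
  for (let i = 0; i < m; i++) {
    const t = (i * units) / m;
    events.set(t, (events.get(t) || []).concat("R"));
  }
  for (let j = 0; j < n; j++) {
    const t = (j * units) / n;
    events.set(t, (events.get(t) || []).concat("L"));
  }
  return [...events.entries()]
    .map(([t, hands]) => ({ t, hands }))
    .sort((a, b) => a.t - b.t);
}
```

Converting the schedule units to milliseconds at a chosen tempo (and feeding them to the timer) is omitted here.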

Timing in JavaScript, Part #2

The primary weakness I bumped into is that simply playing sequences of Audio elements is susceptible to occasional random delays. That said, Chrome is so reliable, it’s almost completely acceptable for the purpose here—essentially a metronome. IE9 RC is also very reliable. Firefox 3.6 is perhaps just under the threshold of acceptability. I found Opera to be too erratic.

Safari 5.0 on Windows delays the playing of all audio elements, thus the UI and the audio are totally out of sync.

Sequencing Audio elements

There are three things worth noting here:

I didn’t find any problems with playing multiple Audio elements simultaneously. The sounds played okay and blended okay.

The biggest hurdle was in realizing that I couldn’t get away with replaying the same Audio element each time it was needed. I needed to create pools of identical Audio elements and cycle through the pools.

I found that repeatedly calling play() on an Audio element sounded erratic, as if the sound got queued up to play but didn’t necessarily play immediately. In fact, I’d venture to say this is the primary weakness across all browsers. A big improvement, at least for Chrome, was to call play() only the first time a sound was played, and to set currentTime = 0.0 to play it again later.

To expound on #2: If you play with the demo, you’ll notice there are only three different sounds. I spent a lot of time trying to get three Audio elements to play and replay and blend acceptably. This was a losing battle. The result was almost random noise.

Rather than work with three Audio elements, I created three pools of ten Audio elements each. (Choosing ten was arbitrary; a much smaller number would probably work just as well.) Playing ten hits of the hi-hat, for example, plays ten instances of the same sound (and playing twenty hits plays each instance twice). This approach cleaned up the sound tremendously.
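The pooling described above amounts to round-robin cycling through identical Audio elements, replaying each via currentTime after its first play() call. A minimal sketch of that cycling logic; the factory parameter and all names are mine, and in a browser the factory would be something like () => new Audio("hihat.wav"):

```javascript
// A round-robin pool of identical sound players. The factory is
// injected so the cycling logic stands on its own outside a browser.
function createPool(factory, size) {
  const items = [];
  for (let i = 0; i < size; i++) {
    items.push(factory());
  }
  let next = 0;
  return {
    play() {
      const item = items[next];
      next = (next + 1) % size; // cycle through the pool
      if (item.started) {
        item.currentTime = 0; // rewind instead of calling play() again
      } else {
        item.started = true;
        item.play(); // first and only play() call for this element
      }
      return item;
    },
  };
}
```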

Timing in JavaScript, Part #3

As I’ve tried more browsers on more systems, I see that the audio performance varies wildly. It seems that performance is irreparably bad on old, slow hardware. But even on faster, newer hardware, performance varies a lot. I’ve implemented two different approaches to playing audio, and which approach is used can be selected at run time:

The default choice is to load the sounds once and replay them when needed. This seemed like the obvious approach to me, but this often results in random delays playing the sounds.

I’ve found that on some systems, performance is better if a new Audio element is created and loaded (and played) each time a sound is needed. (Note: only reloading the audio did not make a noticeable difference. Creating a new Audio element each time is what made the difference.)

The code in the function runs immediately and lets you avoid namespace collisions with other code. If the code includes (inner) functions, those functions are essentially private. In fact, they can be garbage-collected if nothing references them when the code completes.

People seem to prefer the above syntax, though the code is equivalent to:

// 2
(function () {
a bunch of code;
}());

When I first felt the need to do this sort of thing, I tried the more intuitive syntax:

// 3
function () {
a bunch of code;
}();

That is what one would expect to work. It makes intuitive and syntactic sense: it clearly creates an anonymous function and then invokes it. However, it causes a syntax error, because a statement that begins with the function token must be a function declaration (as opposed to an invokable function expression).

Noting first that the following works without complaint:

// 4
var discard = function () {
a bunch of code;
}();

I then arrived at this:

// 5
void function () {
a bunch of code;
}();

Note that void is a unary operator in JavaScript that evaluates its operand, discards the resulting value, and evaluates to undefined. It is neither a function, as has been misreported, nor a data type, nor the absence of a data type, as it is in Java.

To me, #5 is more readable than #1 and #2, though it’s still a bit cryptic. Cryptic or not, I find it less syntactically disconcerting than #1 and #2, where I really don’t like the mysterious outer parentheses.

In case you missed it

Cases #1, #2, and #5 are equivalent. To me, #5 looks the simplest and the least prone to errors. I do wonder if there are any insidious differences deep under the hood of any browsers.
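The void form (#5) runs as-is. A small runnable sketch showing that names declared inside stay private; the variable and helper names here are mine:

```javascript
// The void operator turns the function into an expression, so it can
// be invoked immediately; its return value is discarded.
var result = [];
void function () {
  var secret = "local"; // not visible outside
  function helper(x) {  // essentially a private function
    return x + "!";
  }
  result.push(helper(secret));
}();
// result is ["local!"]; neither secret nor helper leaked out.
```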

Encapsulation? Or clean air?

This is a technique in JavaScript commonly used to avoid namespace pollution, and it’s often referred to as encapsulation, even though it’s really not. Booch defines encapsulation as “serving to separate the contractual interface of an abstraction and its implementation.” To this end, in Java and ActionScript, we have the interface. In JavaScript, we don’t.

I really have not been paying attention. Apparently there are various editions of Shakespeare’s works, with vastly different text among them and no definitive version. In fact, the earliest versions seem to be considered the least reliable. This is the opposite of how original publications of musical works are generally treated, where urtext (original text) editions are thought to bring us closest to the composer’s original intentions.

I grew up thinking I knew the following lines from Romeo and Juliet:

What’s in a name? That which we call a rose
By any other name would smell as sweet.

I heard this on the radio this past weekend as:

What’s in a name? That which we call a rose
By any other word would smell as sweet.

Wanting to verify my sanity, I quickly checked my bookmarked link to the Complete Works of William Shakespeare at MIT. By any other name. Sanity confirmed? Perhaps not. Google has this to say:

Beethoven vs. Shakespeare

At least in music, disparities like this get some attention. Beethoven marked the first movement of his fourteenth piano sonata (the Moonlight Sonata) to be played senza sordino. This brings up two very nonintuitive Italian markings in piano music:

Senza sordino: don’t use (or stop using) the soft pedal. The meaning is something like not muted, which nowadays means don’t use (or stop using) the una corda pedal, also known as the soft pedal or left pedal.

Senza sordini: do use the damper pedal. The meaning is something like without mutes (or dampers) and instructs you to play without dampers on the strings, which effectively means depressing the damper pedal, also known as the right pedal. This term rarely occurs in music; a fancily scripted Ped. marking is used instead.

Note: Schubert uses sordini (i.e., short for con sordini, meaning with dampers) to mean don’t use the damper pedal. All these double negatives really make you stop and think! Brahms uses una corda (one string) and tre corde (three strings), which to me is far less obscure.

Some argue that Beethoven misspelled (or perhaps miswrote) sordini as sordino (plural vs. singular). In fact, some Beethoven editions make the according “correction.” The Harvard Dictionary of Music concludes that Beethoven misspelled the word.

These pedal markings usually apply only to a few measures at a time within a piece. Beethoven’s marking has perhaps never been used at the beginning of another piece, where it seemingly refers to the whole piece. Some argue that Beethoven would not have used such a marking (misspelled or otherwise) unless he meant something special: holding the damper pedal throughout the entire movement.

There are many levels to this controversy! They will be discussed forever and never resolved.

Words and grammar don’t really matter

Trying to find more information on the Shakespeare side of things, I found this website:

The goal

The goal is to create a simple web page (such as a résumé) that prints nicely, specifically at page boundaries. Being a Microsoft Word user, I think in terms of applying keep with next and keep lines together to key places. These concepts are covered nicely in the CSS specification:
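A sketch of how those Word concepts map onto the CSS 2.1 paged-media properties (this mapping and the element selectors are mine, not a quote from the spec):

```css
/* "Keep with next": avoid a page break falling right after a heading. */
h1, h2, h3 {
  page-break-after: avoid;
}

/* "Keep lines together": avoid splitting a paragraph across pages. */
p {
  page-break-inside: avoid;
}
```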

Most browsers do not work

The spec has been regurgitated over the years in various forms, across hundreds, perhaps hundreds of thousands, of web sites, some claiming support exists in every major browser. Has anyone actually tried it in any browser other than IE?

Thoughts

One theory as to why some people are mysteriously satisfied with something that doesn’t work is that they never actually tried it (i.e., printed to paper): they simply put the CSS in place, believed that it worked, and never noticed that it didn’t. Or perhaps they tried it only in IE, which is probably more likely the farther back in time you go (IE7 is just over three years old at the moment).

Maybe it works in certain situations and not others. Affected by printer drivers? I doubt it. By operating system version? I doubt it; I tried Windows 7 and Windows XP. By which HTML element it’s applied to (e.g., table vs. h1)? Perhaps; I only tried headings and paragraphs. Maybe older browsers used to work and the latest browsers have stopped working (I doubt that, too).

Maybe some people are content with trying it and seeing that it doesn’t work and moving on to bigger fish to fry.

In June 2007, Microsoft famously fixed a problem with memory leaks in Internet Explorer 6. IE8 leaks memory worse than IE6 ever did, yet I haven’t been able to find any mention of it. I say worse because:

Problem statement

The memory used by IE8 to create certain DOM nodes is never freed, even if those DOM nodes are removed from the document (until window.top is unloaded). There are no closures or circular references involved.

This problem only occurs in IE8 (and IE9 and IE10 preview). It does not occur in IE6, IE7, Firefox, or Chrome.

In case you missed it: every image element, anchor element, etc. added to a page uses memory that is not reclaimed even if the element is removed from the page.

In practical terms

Think intranet web application rather than Internet web site—that is, something that you may leave running for multiple days. If you have a web page that never reloads as a whole but pulls over many updates from the server (e.g., images and anchors are updated Ajax-style), each update will leak memory. Note that update here means existing DOM nodes are removed and new DOM nodes are created.

Depending on the size and frequency of the updates, IE8 can end up using all available virtual memory, start thrashing within itself, and eventually failures occur; see below for more details.

The user hitting F5 occasionally (if the page can handle it) will unload window.top and free its memory, but there is no apparent programmatic workaround. If the page is in an iframe, unloading the iframe has no effect. It seems that window.top must be unloaded.

In terms of code

First example

Here’s the basic idea of the leak, which is to repeatedly execute the following:
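The original snippet isn’t reproduced here, but the kind of churn described is easy to sketch. This is my reconstruction, not the original code; the element choice and function name are assumptions:

```javascript
// Repeatedly create elements of the kinds that leak (e.g., images,
// anchors), attach them to the page, then remove them. In IE8 the
// memory for the removed nodes is never reclaimed until window.top
// unloads. (Era-appropriate var-style JavaScript.)
function churn(doc, count) {
  for (var i = 0; i < count; i++) {
    var img = doc.createElement("img");
    doc.body.appendChild(img);
    doc.body.removeChild(img); // removed, but IE8 keeps the memory
  }
}
```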

Other notes

These examples leak in IE8 but not in other browsers. I’ve tried IE8 on Windows XP, Windows Vista, Windows Server 2008, Windows Server 2008 R2, and Windows 7. I’ve tried IE8 in IE7 backwards-compatibility mode, as well as every possible variation of quirks and standards modes. Though that’s by no means every variation of everything, it makes me comfortable that this is not a fluke.

Live example

Here’s a link to an example page that shows the problem highly amplified; that is, the page’s only job is to exhibit the leak, highly exacerbated. There are start/stop links on the page, so don’t worry about it doing something terrible on its own.

Characterizing the leak

The memory size (called private working set on Windows 7) of iexplore.exe grows steadily over time, but this doesn’t lead directly to failures. IE memory usage does seem capped, and it does seem to be trying to manage memory from a DHTML perspective within certain constraints.

After it reaches its self-imposed cap, it seems to start thrashing within itself. Operations that could previously be performed maybe 100 times per second begin taking over a minute each. Eventually, DHTML-style out-of-memory errors can occur.

In closing

I’m very interested in hearing about workarounds to this problem. I will post any meaningful updates as they are uncovered.

Update 2010-03-26

Microsoft made an IE9 preview available on 2010-03-16. It runs my leak test and does not leak. This is very promising. Additional update on 2010-06-24: We are now up to preview #3, and still no leaks.

There is a lot of memory churn in IE9 for the elements that leak in IE8, and there’s no churn at all for elements that don’t leak in IE8. The memory usage pattern in IE9 could be characterized as following a sawtooth pattern (low amplitude, high frequency), but with no long-term growth. I ran the test for an hour with no memory growth in IE9, whereas IE8 grew by about 1 GB in the same time.

Update 2010-09-16

Microsoft released the first beta of IE9 yesterday. The leak is back, at a rate of about 1 GB an hour: the same elements are leaking as before, at about the same rate as before.

Update 2011-02-10

Update 2011-03-14

Update 2011-04-12

Microsoft released IE10 Platform Preview 1. In terms of leaking, this is very similar to the IE9 preview: no overall leak, but a lot of churn and the sawtooth pattern is back.

Update 2011-06-29

Microsoft released IE10 Platform Preview 2. No leak. I suppose there is no SmartScreen Filter in these preview releases, which might account for the leak being absent from them.

Update 2011-09-13

Microsoft released the IE10 Platform Preview 3 as part of the Windows 8 Developer Preview. The leak is still there if the SmartScreen Filter is off; otherwise (the default), the leak is not there.

Update 2011-11-29

Microsoft released the IE10 Platform Preview 4 for the Windows 8 Developer Preview. The leak is not there, but I suspect the SmartScreen Filter is also not there; it’s hard to tell.

Update 2012-02-29

Microsoft released the IE10 Platform Preview 5 for the Windows 8 Consumer Preview. The leak is not there, and the SmartScreen Filter definitely is. This is the best it’s looked in years.

Update 2012-05-31

Microsoft released the IE10 Platform Preview 6 for the Windows 8 Release Preview. The leak is not there, and the SmartScreen Filter definitely is. This is the best IE has looked in years. And Windows 8 is looking pretty good, too (after going to the desktop and staying there, that is).

Update 2012-11-13

Microsoft released the release preview of IE10 for Windows 7. The leak is back, just as before: with the SmartScreen Filter disabled, the leak is present; with the SmartScreen Filter enabled, the leak is not present.

Update 2013-02-26

Microsoft released IE10 on Windows 7. The leak is gone! With the SmartScreen Filter enabled, the sawtooth pattern is there. With the SmartScreen Filter disabled, it looks even better: no manic memory usage and no memory growth.

Update 2013-06-26

Microsoft released an IE11 Preview with the Windows 8.1 Preview. I haven’t had a chance to look at it.