Contact

Monday, December 21, 2009

When I was an undergrad at MIT, the end of the fall semester meant I had about 5 weeks of freedom to hack on stuff before the start of the spring semester. At MIT, instead of starting classes back up in January, they have what is called the Independent Activities Period (IAP). Although IAP is optional, many students return for it (except the Hawaiians I knew, who generally decided to stay home in their tropical paradise during one of the coldest months in Boston) because there are so many great opportunities: you can get scuba certified, do research, or my personal favorite -- engage in programming competitions!

For you holiday hackers who haven't found the time to play around with Closure yet, I've tried to make it a little easier to get started by creating Closure Lite. Closure Lite is a single JavaScript file that you can include on a web page to start using a subset of the Closure Library. This is similar to the approach used by other popular JavaScript libraries such as jQuery. But as the Closure Lite documentation explains, although Closure Lite is a good way to start learning the Library, it is recommended to learn and use the Closure Compiler on your production JavaScript.

I hope that both Closure and Closure Lite are useful to MIT students who are competing in 6.470 this IAP!

If you do not feel like watching the entire presentation (it's a little over an hour), I recommend watching Parts 3 and 5. In Part 3, Bruce Johnson discusses GWT's JavaScript compiler (which is different from the Closure Compiler) and GWT 2.0's new code splitting feature. From the presentation, it sounds like the GWT compiler has added some of the optimizations that the Closure Compiler has had for some time. (I've heard of instances of the Closure Compiler reducing compiled code from GWT by 20%, so clearly there was room for improvement.) Historically, the GWT and Closure compilers were separate codebases because one was open-sourced and the other was not, but since that is no longer the case, perhaps we will see some convergence in the future. I wouldn't hold my breath, though.

But what was more impressive was the ease with which GWT was able to introduce code-splitting. That is, dividing up a large JavaScript file into smaller files, the majority of which get loaded asynchronously by the application as the features that depend on them are accessed. The Closure Compiler has support for such a feature, but it is undocumented and requires a bit of work by the developer, even if he knows what he is doing. The code-splitting feature in GWT 2.0 (introduced about 10 minutes into Part 3) is much more elegant and straightforward. I hope that the Closure Tools suite evolves to make this just as simple.

Then in Part 5, Kelly Norton introduces Speed Tracer, which is a Chrome extension that gives an unprecedented amount of insight into what Chrome is doing when it runs a web application. It is more similar to dynaTrace than it is to Firebug. Speed Tracer is informative and so snappy that you might not believe the UI is written in HTML5 -- try it out!

However, my one gripe with the Campfire presentation is that you might come away from it believing that Speed Tracer works only with GWT applications, but that is not the case at all! Although its documentation lives under GWT on code.google.com and Speed Tracer was written using GWT, it can be downloaded and used completely independently from GWT. This morning, I installed it to explore the performance of some webapps I used to work on (which were written using Closure), such as Google Tasks. (I found some areas for improvement which I forwarded to the team.) I strongly recommend evaluating your own web applications using Speed Tracer as you may be surprised at what you discover.

Also, if you're like me, you may not notice the links to additional Speed Tracer documentation because they appear below the fold on the landing page. Under the Tools heading in the left-hand-nav, there are links to Hints, the Data Dump Format, and the Logging API.

If you stop and think about it, this level of tool support is essential for the Chrome OS initiative to succeed. If the browser is going to substitute for the desktop as a platform, then it must be fast (which is where Chrome comes in), it needs to have a kickass API (which is where HTML5 comes in), and it needs to have best-of-breed developer tools (which is where GWT, Closure, and Speed Tracer come in) so it is possible to build web applications that can compete with (and ideally exceed) desktop applications. When Chrome OS was originally announced, I was a naysayer, but now that more of the pieces are starting to come together, I'm getting a bit more optimistic.

I chose the title of this blog post very carefully: I do not claim that the Closure Compiler will always minify 85% better than YUI Compressor, and I wanted to make it clear that the dramatic results are specific to Closure Library code which is written in a style that was designed to be minified by the Closure Compiler. Although both tools can be used independently, they were meant to be used together.

That is not to say that other libraries cannot get dramatic minification benefits from the Compiler -- they can! But doing so requires following the style guidelines explained in the Advanced Compilation and Externs article on the Google Code web site.

And even if you think my test is unfair, or that I rigged it to butcher the YUI Compressor, I hope you can at least benefit from these two code samples that came out of creating these tests:

Apparently Kai has also found an inconsistency between the online documentation for GPolyline.fromEncoded() and its implementation, but I want to get someone from the Maps team to confirm before updating that.

Friday, November 6, 2009

I have been waiting months to publish this blog post. When I attended The Ajax Experience 2009, I saw a number of "misguided" things that people were doing with JavaScript.* Knowing that Google was in the process of open-sourcing the Closure Compiler and the Closure Library (I was working on the Compiler effort before I left), I wanted to get up and yell, "Stop -- you're all doing it wrong!"

But I didn't.

The Closure Library and Closure Compiler that Google released yesterday are game-changing. I have always heavily subscribed to Fred Brooks's No Silver Bullet argument, but it's possible that the Compiler will improve your web development processes by an order of magnitude. Errors that previously may not have been caught until manual testing can now be caught at compile time. Before the Closure Compiler came along, there wasn't even such a thing as "compile time" when it came to JavaScript!
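To make that concrete, here is a sketch (my own example, not taken from the Closure docs) of the kind of error the Compiler can now catch: with JSDoc type annotations and type checking enabled, a type mismatch is reported at compile time, even though the code remains ordinary JavaScript that runs unmodified.

```javascript
/**
 * @param {number} x
 * @return {number}
 */
function square(x) {
  return x * x;
}

square(4);      // fine: a number goes in, a number comes out
// square('4'); // the Compiler flags this as a type mismatch at compile
//              // time, even though plain JavaScript would happily run it
```

Plain JavaScript would silently coerce `'4' * '4'` to a number at runtime; the annotations let the Compiler surface the mistake before the code ever ships.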

The Compiler is complex -- it will likely take some time for developers to wrap their heads around it. It was hard to write JavaScript at Google without also thinking about what the Compiler would do to it. For sure, the Compiler will change the way you write JavaScript code.

To illustrate one of the things that will change, I put together a detailed article on inheritance patterns in JavaScript that compares the functional and pseudoclassical object creation patterns. Now that the Closure Compiler is available, I expect the pseudoclassical pattern (that the Closure Library uses) to dominate.

*I saw a number of innovative things at the conference as well, but none that would have the same impact as the Closure Compiler. However, there was one presentation at The Ajax Experience titled The Challenges and Rewards of Writing a 100K line JavaScript Application that revealed that a company called Xopus in the Netherlands had built their own JavaScript compiler. It is the closest thing I have seen outside of Google to the Closure Compiler. My only objection to it is that I believe that the input to their compiler is not executable on its own.

For example, slide 12 shows some sample code that calls Extends("com.xopus.code.Animal") which has no apparent reference to the class doing the extending, so it is hard to imagine how calling Extends() has the side-effect of providing Animal's methods to Monkey. Presumably, the compiler treats Extends() as a preprocessor directive whereas Closure's goog.inherits() will actually add the superclass's methods to the subclass's prototype. Closure generally relies on annotations, such as @extends, for compiler directives rather than function calls. Closure code should always be executable in both raw and compiled modes.
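For contrast, here is a minimal sketch of how goog.inherits() wires things up at runtime (adapted from the Closure Library's implementation, with the goog namespace boilerplate omitted; the Animal/Monkey names echo the Xopus slide and are purely illustrative):

```javascript
// A stand-in for goog.inherits: it splices the parent's prototype into the
// child's prototype chain at runtime, so the code behaves identically in
// raw and compiled modes.
function inherits(childCtor, parentCtor) {
  /** @constructor */
  function TempCtor() {}
  TempCtor.prototype = parentCtor.prototype;
  childCtor.superClass_ = parentCtor.prototype;
  childCtor.prototype = new TempCtor();
  childCtor.prototype.constructor = childCtor;
}

/** @constructor */
function Animal() {}
Animal.prototype.speak = function() { return 'generic noise'; };

/**
 * @constructor
 * @extends {Animal}
 */
function Monkey() {}
inherits(Monkey, Animal);
Monkey.prototype.speak = function() {
  // superClass_ gives explicit access to the overridden method
  return 'ooh ooh (' + Monkey.superClass_.speak.call(this) + ')';
};
```

Because the inheritance is an actual side effect of running the script, there is no preprocessor magic: a Monkey really is an instanceof Animal whether or not the Compiler ever touches the code.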

Note that the recommended way of extending compiler behavior is to subclass CompilerRunner (most likely createOptions() will be the method you are interested in overriding). Your subclass should have a main() method and be set as the Main-Class in a jar. Such a jar can then be used with the -c option in the calcdeps.py utility.

Thursday, November 5, 2009

Today Google released a suite of JavaScript tools that are used to construct massive web applications such as Gmail and Google Docs. The suite is named Closure Tools, and I think you'll find it superior to existing offerings, particularly with respect to minifying JavaScript. The post on the Google Code blog links to a ton of new code and documentation, so trying to determine where to start can be overwhelming.

I used all of these tools when I worked at Google, so there is a lot I could write about them, but for now I'm just going to try to convince you that the Closure Compiler is not your ordinary JavaScript minifier. There is a web version of the Compiler that you can play with to convince yourself that it is worth learning more about. If you go to the site, you will see the following code:
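The sample there is roughly the following (reconstructed from memory, so the exact wording on the site may differ). In the browser demo, alert() is the built-in; the stand-in at the top is only so this sketch also runs outside a browser.

```javascript
// Stand-in for the browser's alert(), recording messages instead of
// popping up a dialog (only needed outside the browser).
var alertLog = [];
var alert = function(msg) { alertLog.push(msg); };

// The demo sample itself: a small function plus a call to it that the
// Compiler can fully inline in Advanced mode.
function hello(name) {
  alert('Hello, ' + name);
}
hello('New user');
```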

Using the Compile button to run the Compiler, try each of the options to the right of the Optimization: label when compiling the sample code. The compiled code appears in the first tab on the right side of the screen. Note that with the Advanced option, the code is compiled down to:

alert("Hello, New user");

Now that's some serious inlining! Can your minifier do that? Probably not. To get the maximum benefit out of Closure Tools, you should fix your JavaScript code so that it compiles in Advanced mode. Reading Google's article on advanced compilation is the quickest way to learn what the Compiler expects from your JavaScript, and it should also give you an idea of what other optimizations the Compiler is capable of.

Wednesday, November 4, 2009

One of the things I planned to do after leaving Google was port an NES emulator written in Java to JavaScript. With a JavaScript emulator, it would be possible to play Nintendo games on an iPhone via a web page. (The alternative would be to write an emulator in Objective-C and package it as an iPhone app, but that would be unlikely to make it through Apple's approval process.)

One of the most striking things about JSNES is that it runs at full speed in Google Chrome, but barely runs on Firefox 3.5 or Safari 4. This makes me think we've been going about browser benchmarks all wrong -- I don't care which one can calculate the nth Fibonacci number the fastest, I just care about which one lets me play Contra! (Though to be fair, this probably has more to do with performance differences with a browser's <canvas> implementation rather than its JavaScript engine.)

So there I was, all ready to write all this JavaScript code, only to find out that it had already been done. Duplicating it seemed like a bad idea, but I still had the NES on my mind, so I started looking around to see what other advances in emulation had come along since I last played around with it a couple of years ago.

I knew Craig had bought a kit some years back which, after some soldering, made it possible for him to plug a real NES controller into a USB port. Fortunately, technology has improved and now you can buy an adapter with an NES port on one end and a USB port on the other. No soldering required! Since the "USB NES RetroPort" costs $19 and a used NES controller only costs $3, it seemed like a good idea to buy a couple of RetroPorts and a stack of controllers so I can just swap in new controllers when the buttons wear out.

I bought some hardware (the guy who runs retrousb.com was very helpful) and started playing with different emulators to see which ones would support my new controllers. I learned that many emulators do not let you configure your joystick directly; instead, you are expected to install JoyToKey to convert joystick input to keyboard input and then map your joystick to the key commands required for your emulator. Honestly, it worked fine, but I wanted an all-in-one solution.

On Windows, VirtuaNES worked quite well and had support for configuring controllers, but the web site was in Japanese, so it took me a while to figure out how to do that. Once I confirmed that my controllers were working, I started looking at emulators for Mac because I wanted to rekindle my project from over two years ago of using a PowerPC Mac Mini as my NES emulation hub.

Emulator options are much more limited on Mac. I started out by looking for emulators written in Java, since those should be cross-platform. I took another look at NESCafe (which I reported did not support sound in 2007, but seems to now), but as far as I could tell, it only supported one controller, so that was a deal-breaker. Then I took a look at vNES, which seemed much more promising, except for this FAQ that claimed that, for reasons unknown, it would not work on a PowerPC Mac.

However, the code for vNES is open sourced under the GPL 3, so I thought I would take a stab at it. The bug turned out to be very simple (it seems like the type of thing that FindBugs should be able to pick out easily):

If you look carefully, you'll see that although there is a check to determine whether mixerInfo is empty, it uses mixerInfo[1] for no documented reason. When I looked at the mixerInfo array, I discovered it only had one value on my PowerPC Mac Mini, two values on my Intel Mac Mini, and nine values on my Vista Thinkpad! I changed the code to use mixerInfo[0] and all was well.

(Aside: Why is the code for every emulator I look at so messy? There are never any comments -- it makes me think one guy figured out how the NES worked and everyone else has just cloned it, so there aren't any comments because no one really knows what is going on. Also, all the classes are in the default package, there are println statements commented out all over the place, etc. In debugging vNES, I tried to clean things up a bit by putting the code in a com.virtualnes package, adding a build.xml file, and refactoring things so it could be run as either an application or an applet. I am making the zip with my changes to vNES 2.11 available on bolinfest.com. I would have tried to contribute a patch to vNES, but the Google Code project appears to be empty.)

Although I got vNES working on my PPC Mini, it was prohibitively slow. Since I had already been playing around with the source code, I considered trying to optimize it, but because of the "no comments in the source code" thing, I realized that could take days. Instead, I went back to Nestopia.

Richard Bannister's Nestopia is a solid emulator for the Mac. It runs at full speed on the PPC and looks great when output to my flatscreen TV. The sound works, both of my controllers hooked up via my RetroPorts work -- this is the real deal.

The only thing it doesn't do is take the path to the ROM as a command-line option, and this is what kept me up past 4am last night. You see, I want to build a Cover Flow UI on top of the emulator for selecting the game to play. To do that, I need to be able to programmatically open Nestopia with a particular ROM file.

If Nestopia were open source, I could have tried to fix it myself to support this feature. In the release notes bundled with Nestopia 1.4.1, the author notes

Martin Freij has generously agreed to license Nestopia to me under a closed-source license for the present. As soon as I have my API kit ready, a buildable version of Nestopia will be released with my shell library. The license for this has yet to be decided but most likely will be normal GPL with my shell excluded under section three of the license. This has been postponed repeatedly due to lack of time but will be released one day - honest!

That dates back to September 27, 2008, so I wasn't going to hold my breath waiting for the source to be released. Besides, from his list of projects, Richard seems to have a lot going on, so I can imagine that he doesn't have the time for this sort of thing.

Regardless, I want my Cover Flow! Because I couldn't change the code for Nestopia, I tried to automate it with AppleScript instead. This is when I should have put the coffee on. According to Google Web History, I did over 100 searches last night while developing my script.

The first thing I that I got to work (after much experimentation) was the following:

-- because the "Open Folder" dialog only deals with
-- folders and not files, we put each .nes file in
-- its own folder so we can open the folder and
-- then reliably select the only item that comes up
delay 1
keystroke (item 1 of argv)
key code 36

-- use the down arrow to select the file and hit enter
delay 1
key code 125
key code 36

-- go into fullscreen mode using the keyboard shortcut
keystroke "`" using {command down}
end tell
end tell
end run

The part of this script that is particularly gross is the logic with the "Open Folder" dialog. Nestopia displays what appears to be a standard "File Open" dialog, but I could not, for the life of me, figure out how to script it. As you can see, I resorted to using key and mouse events to type in the value I wanted, and had I relied on this, I would have had to have an individual folder for each ROM because Finder (at least on 10.4.11, which is what my PPC Mini runs) lets you type in folder names, but not path names. If there are any AppleScript masters out there, I'd be very curious to see how else you would do this.

Unfortunately, it was not until hours after I started this project that I rediscovered the code I wrote in 2007. Back then, I wrote a CGI script in Perl which would build up some AppleScript and run it from the command line. At the time, this was the easiest way to send a command via HTTP to my Mini to kick off Nestopia:

#!/usr/bin/perl
use CGI qw(param);

# let's get this out of the way before we forget!
print "Content-type: text/html\n\n";

One thing that you'll notice is that file paths in Finder are gross. I ended up doing the "of folder" thing because that was the code Script Editor produced when I used Record to help figure out the AppleScript I needed to write. The one good thing about this script, however, was that it reminded me that simply opening the file would trigger Nestopia, because it is the application associated with ROM files on my Mac. This helped me clean up my current script considerably:

on run argv
tell application "Finder" to open file ((POSIX file (item 1 of argv)) as string)

-- go into fullscreen mode using the keyboard shortcut
delay 1
keystroke "`" using {command down}
end tell
end tell
end run

It took me at least half an hour of Googling until I came across a solution for passing in the file path as an argument. Apparently AppleScript only deals with HFS paths instead of POSIX paths like everyone else. It is particularly frustrating that Script Editor allows you to write POSIX file "/Users/bolinfest/drmario.nes", but as soon as you compile the code, it becomes file "Macintosh HD:Users:bolinfest:drmario.nes". What kind of editor rewrites your code into some kind of unmaintainable equivalent when you compile it?

I'm exhausted, so I haven't even started working on the Cover Flow part of the project yet, but at least I've resolved one of the big issues. It looks like there are working examples of Cover Flow UIs in JavaScript, so I will likely set up a web server on my Mac Mini with a similar CGI script that will shell out to my compiled AppleScript to launch the ROM. That way, I'll be able to browse my NES catalog from Safari on my iPhone and kick things off from there!

Wednesday, October 21, 2009

Though I'm not particularly fast, I would still call myself a runner. Many non-runners wonder how runners can possibly entertain themselves over the course of a 5-mile run. To be honest, I don't normally remember what goes through my head, but today it was trapezoids.

Specifically, given trapezoid ABCD (where AB || CD), how can you prove that CA + AB + BD is greater than CD?

It seems obvious that the sum of the other three sides should be longer (even if CA and BD are really short), but that doesn't mean it doesn't merit a proof! Drawing a single diagonal lends itself to a simple proof using the triangle inequality:

This yields the following two inequalities:

(1) CA + AB > BC
(2) BC + BD > CD

If we add BD to both sides of (1) we have:

(3) CA + AB + BD > BC + BD

Combining (2) with (3) we have:

(4) CA + AB + BD > BC + BD > CD

So this proves that CA + AB + BD > CD!

I was thinking about this while running around the Charles River this morning, wondering whether it would ever be shorter to run past a long bridge (CD) to cross the river at a shorter bridge (AB) and then come back, but now we've proved that is never the case!

I only used a trapezoid because that more closely matched the geometry of what I was running, but I think it's trivial to extend the proof to any quadrilateral (though ones that are not convex may not cooperate).

Monday, October 12, 2009

Last week, I made a slight update to my blog. Specifically, the following disclaimer has been removed: "This is my personal blog. The views expressed on these pages are mine alone and not those of my employer." No, this is not a change in Google's employee blogging policy -- two weeks ago I decided that October 8, 2009 would be my last day at Google.

After 4 years, 2 months, and 1 day, it was finally time for me to move on. I remember spending hours on The GLAT and other puzzles Google advertised in MIT's student newspaper, hoping they would somehow increase my chances of getting the one SWE job I wanted so badly. When I finally scored an on-campus interview during my MEng, I read at least three books on software interviews and puzzles because I wanted to be prepared for anything.

Somehow I made it through the process and was given a job offer as well as the choice of which office to join. Without hesitation, I chose Mountain View because I wanted to be in the thick of it at Google HQ. On the survey we got asking about which areas we would most like to work on, I checked off a handful of boxes, but also wrote: "If Google is working on a calendar product, then I would prefer to work on that." On my first day of work, not only did I learn that Google had been stealthily working on a calendar product, but that I was going to be responsible for building much of its UI -- working at Google really did seem like a dream come true!

Eight months later, that little calendar product finally launched (which involved my one and only all-nighter at Google). Shortly thereafter, we moved the company off of Oracle Calendar and onto GCal (and there was much rejoicing!). I really loved working on Calendar, but somewhere in there I had decided to move back to the east coast, and because someone else had simultaneously decided it was important that projects not span offices, it was really hard for me to keep working on Calendar.

To solve that problem, I moved offices again, this time to the other side of the world in Sydney. There I met many friendly, upside-down people whose curious idiomatic expressions made their way into my weekly snippets. As a bonus, I also got a front-row seat to the inception of Google Wave (which was an exciting mix of rebellion, chaos, and AJAX).

But whatever, I wasn't distracted by the reinvention of communication because I was busy working on Tasks! It was sort of like working on Calendar in that it was the #1 feature request since it had launched, but it was a lot more like working on Gmail because that's what we actually integrated with first (you'll notice we did manage to take care of that Calendar integration thing later). Tasks was fun, but exhausting. One of the many things Google has taught me is that building simple things is often extremely complicated and Tasks was no exception. (I think I've spent at least one man-month trying to figure out the best way for the cursor to move up and down between tasks, but that's a topic for another post.)

After we had integrated Tasks with almost everything (Gmail, Calendar, iGoogle, iPhone/Android, XHTML mobile phones), I decided that I should take all of the knowledge I had amassed working on Apps and contribute it to the greater Apps good, so I joined the Apps Infrastructure team. I like to think I made some good contributions there, but only time will tell if the things I put in place will last.

It's only been a few days since I've left, but now that I'm on the outside, I already feel like a boy standing on his tiptoes, reaching up to the display window of a toy store, desperately trying to get a glimpse of the exciting things that lie within. Every time I hit a lapse in web surfing, my first inclination is to open a new tab to check my work email, but then I glumly hit ⌘-W and go back to whatever I was doing.

It's not so much that I miss going to work as the feeling of being in the know. One of the main reasons I decided to leave Google was misgivings about it becoming such a big company, though admittedly, so far Google has continued to do innovative and disruptive things, such as the YouTube Symphony Orchestra and Chrome Frame. I just miss being able to preview what's in the pipeline.

At Google, I was able to work with many sharp people, launch cutting-edge products, and reach millions of users. My career there took me around the world, and my experiences there have shaped me as an engineer. I am grateful for my adventures at Google, but like college, after 4 years [2 months, and 1 day], it is time to graduate and move on.

Sunday, April 5, 2009

If we let the radius of the inner circle be r and the length of segment BC be s such that the radius of the outer circle is r+s, then we can express all the quantities that we are interested in in terms of r and s.
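Writing those quantities out (the original expressions seem to have been lost from the post, so this is reconstructed from the discussion that follows): each 90° arc is a quarter of its circle's circumference, and segments CB and DE each have length s.

```latex
\text{arc } CE = \tfrac{1}{4} \cdot 2\pi(r+s) = \tfrac{1}{2}\pi r + \tfrac{1}{2}\pi s
\qquad
\text{arc } BD = \tfrac{1}{4} \cdot 2\pi r = \tfrac{1}{2}\pi r
```

So the first question compares ½πr + ½πs against ½πr + s, and the second compares ½πr + ½πs against ½πr + 2s.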

When comparing each pair of quantities, the ½πr terms cancel each other out. Because s is a length, we know that s > 0, which means that ½πs is always greater than s but always less than 2s. In terms of answering the quantitative comparison questions, this means the answer to the first question must be A and the answer to the second question must be B.

Put another way, if you are walking around a 90 degree arc (such as from point C to point E), it will always be faster to walk around the outer arc than to walk to an inner arc, traverse it, and return to the outer arc. What is particularly interesting is that this is true regardless of the radius of the inner arc.

Now before you start taking the outside edge of every curve, note that this only holds true for arcs less than a certain size. For example, what if you were traversing a semicircle instead?

The length of the outer arc would be:
½(2π(r+s)) = πr + πs

The length of the two straightaways plus the inner arc would be:
s + ½(2πr) + s = πr + 2s

Because πs is greater than 2s, it makes more sense to walk to the inner arc when walking around a semicircle! So where is the breaking point? This can be determined by letting x be the fraction of the circle to traverse and setting the two path lengths equal to one another and solving for x:
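Carrying out that algebra (the equation itself appears to be missing from the post, so this is reconstructed): set the outer path equal to the straightaways-plus-inner-arc path and solve.

```latex
x \cdot 2\pi(r+s) = 2s + x \cdot 2\pi r
\;\Longrightarrow\;
2\pi x s = 2s
\;\Longrightarrow\;
x = \frac{1}{\pi} \approx 0.318
```

So the outer path is shorter only when the arc spans less than 1/π of the circle, about 115°. That is consistent with both cases already worked out: a quarter circle (x = ¼ < 1/π, outer wins) and a semicircle (x = ½ > 1/π, inner wins). Notably, the breakeven fraction does not depend on r or s.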

I thought that this would make a great SAT question for several reasons. First, and most importantly, this is a quantitative comparison question that does not have any numbers, yet its answer is not D! I'm pretty sure that most students hated geometry (so sad!), so when they come to something like this, it's pretty easy to throw up one's hands and declare the question unsolvable. Of course such a student would be incorrect...

Also, this question is resistant to some common SAT-solving techniques. For example, for geometry questions where the figure is to scale (such as this one), The Princeton Review will tell you to make small marks on your answer sheet so you can use it as a ruler, but because this problem involves arcs, that is of little help. Another common SAT tip is to redraw the diagram, exaggerating r or s, but if you play around with that, you can probably convince yourself that there are cases where one quantity is larger than the other, but it's hard to say conclusively that it holds true for arbitrary values of r and s.

That means the only ones who will be answering the question correctly are those who can do the math. Isn't that how it should be?

Tuesday, March 24, 2009

Each calendar will be updated automatically – I have a script that runs twice an hour while baseball games are on (it gets a break from 3am-noon eastern time) to scrape the latest data and update all of the calendars. Unfortunately, I don't think the calendars will be re-indexed that quickly by Google Calendar, but it's better than nothing.

I actually wrote Chickenfoot code two years ago to scrape data from mlb.com for use with CalMap and wikicalendars.com. Unfortunately, I never quite got Chickenfoot working with cron (which meant I manually ran the Chickenfoot script every morning to update the calendars), so this year I actually decided to run my original JavaScript code using Rhino since it's easy to kick off a Java process using cron. It sounds a little crazy, but it works great!

Monday, March 23, 2009

My Web Content Wizard is fixed! Something must have changed on Google's side – I have not touched the code for the Wizard in years, but users have been writing in for some time now asking me to fix it.

I remember that setting up AuthSub the first time around was pretty miserable, so I was reluctant to sit down and try to debug it. I'm not sure what went wrong, but visiting https://www.google.com/accounts/ManageDomains to register bolinfest.com appeared to fix the problem.

Sunday, March 22, 2009

The SAT I math test has a special class of questions called Quantitative Comparisons. Each question shows two columns, A and B, each describing some quantity, and the four answer choices are always the same:

A. The quantity in Column A is greater.
B. The quantity in Column B is greater.
C. The two quantities are equal.
D. The relationship cannot be determined from the information given.

These questions can be devilishly tricky (particularly when the answer is D). Consider this classic example:

x > 0

Column A

Column B

2x

(2x)²

Many students will only consider integer values of x and choose B as their answer, which is incorrect. What if x is ¼? Then the value of Column A is ½ and the value of Column B is ¼, so Column A is greater! But if x is any value larger than ½, then Column B will be greater. This means the correct answer must be D. (I deliberately chose 2x instead of x to eliminate the case where trying x=1 makes the two columns equal and trying any x>1 makes Column B greater, which makes it easier to come up with D as the right answer.)
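If you want to see that crossover numerically, here is a quick sketch (the helper names are mine):

```javascript
// Compare Column A (2x) against Column B ((2x)^2) at a few positive values
// of x: below x = 1/2 Column A is greater, at x = 1/2 they tie, and above
// it Column B is greater -- which is why the answer is D.
function columnA(x) { return 2 * x; }
function columnB(x) { return Math.pow(2 * x, 2); }

var aWinsAtQuarter = columnA(0.25) > columnB(0.25); // 0.5 vs 0.25
var tieAtHalf = columnA(0.5) === columnB(0.5);      // 1 vs 1
var bWinsAtOne = columnB(1) > columnA(1);           // 4 vs 2
```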

So that is a quantitative comparison question. Like many upper-middle-class kids from the northeast, I took the SATs for the first time in the 7th grade to try to get into CTY. I didn't know what CTY was at the time, but my mother did, and fortunately for me, she recognized what a tremendous opportunity it would be for me to go. Mom bought me my first copies of 10 SATs and Cracking the SAT that year. (I say "first copies" because I had to buy new ones the following year when they re-centered the scoring and changed the test.) The SAT became my new challenge, and I've been kinda obsessed with the thing ever since.

Which is why while pondering the most efficient route on my walk to work, which involves walking around an arc (a rarity in New York City), I have developed the following Quantitative Comparison question:

CE and BD are 90° arcs on circles which are concentric at point A. Segment AC is greater than segment AB.

Column A: The length of arc CE.
Column B: The length of arc BD plus the length of segment DE.

And also the following quantitative comparison question for the same diagram:

Column A: The length of arc CE.
Column B: The length of segment CB plus the length of arc BD plus the length of segment DE.

I'll write a follow-up post with the answers in a couple of days. I know I found the result surprising (and now I know the best way to walk to work)!
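If you would rather check your own answer than wait for the follow-up, here is a sketch that computes all three quantities. It assumes AC = R (outer radius) and AB = r (inner radius) with R > r, that each arc is a quarter circle, and that B lies on segment AC and D on segment AE, so that CB = DE = R − r. All of those names and assumptions are mine, not from the original diagram; I will let the numbers speak for themselves.

```javascript
// Compute Column A and the two versions of Column B for hypothetical
// radii R (= AC) and r (= AB). A quarter-circle arc has length (π/2)·radius.
function columns(R, r) {
  var quarter = Math.PI / 2;
  var arcCE = quarter * R;                         // Column A in both questions
  var q1ColumnB = quarter * r + (R - r);           // arc BD + segment DE
  var q2ColumnB = (R - r) + quarter * r + (R - r); // CB + arc BD + DE
  return { arcCE: arcCE, q1ColumnB: q1ColumnB, q2ColumnB: q2ColumnB };
}

console.log(columns(10, 6));  // try your own radii
```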

P.S. If you have that problem where the popup with the arrow pointing to Tasks appears every time you log into Gmail: (1) I apologize, and (2) if you click on the Tasks link that it is pointing to, the popup should never show up again. Tell your friends!

Monday, February 9, 2009

Tonight I decided I should blow the dust off the machine I got from Dell a couple of years ago so I could achieve my longstanding dream to have a giant programmable lightswitch. I've lived in at least six apartments over the past two years (protip: keep track of old addresses by checking where you've shipped stuff on Amazon), so I've never been in any place long enough where it was worth trying to set this up. With my 13-month lease in hand, it seemed like it was finally time to try this out.

The key elements that I've acquired along the way are a Dell touchscreen monitor (2005), Windows XP machine (2007), and the X10 ActiveHome Pro 9 Piece Kit (2004?). My plan for stringing these things together seemed pretty simple:

1. Plug monitor into computer
2. Install ActiveHome software
3. Press giant virtual button

Simple, right? Actually, it had far fewer snafus than some of my other projects. Step one was to get the touchscreen working even though I didn't have the CD with the drivers. Despite my repeated Googling, the drivers were pretty hard to find until I discovered that somehow I was being sent to Dell Australia instead of Dell US (work VPN playing games, maybe?), but once I got that straightened out, I got the monitor live and responding to touch pretty easily.

The next step was to get the ActiveHome software running. Fortunately, I backed up the installer for the SDK on bolinfest.com a few years ago because I figured I would lose it otherwise (I was right!). Installing the SDK was no problem, but I couldn't seem to find the thing that launches the ActiveHome GUI pictured on the web site. This wasn't a big deal because the SDK comes with sample code in a bunch of languages, including JScript, which you can just load in IE if you allow ActiveX to do its thing. I even got the JScript sample to stop giving me ActiveX warnings every time I loaded it by sharing the folder and adding it as a Trusted Site. (The Trusted sites thing in IE doesn't let you trust local files unless you serve them as a shared folder -- I'm wary of what I've opened myself up to as a result of this.)
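For the curious, the core of the SDK's JScript sample boils down to something like this. I'm reconstructing the ProgID and the SendAction call from memory, so treat the exact names as my best recollection rather than gospel (and note this only runs in IE with ActiveX enabled):

```javascript
// Send an X10 power-line command for a given house/unit code, e.g. 'a1'.
// 'X10.ActiveHome' and the SendAction signature are from memory, not the docs.
function turnOn(houseUnit) {
  var x10 = new ActiveXObject('X10.ActiveHome');  // IE-only ActiveX control
  x10.SendAction('sendplc', houseUnit + ' on');   // 'sendplc' = power-line command
}
```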

Now I was up to step three: "press the big red button." I pressed. I pressed again. I pressed several more times and exclaimed some things I do not care to repeat. My X10 components had been working fine via the remote from the Kit until a few weeks ago when I bought some new components (uh-oh) and tripped a circuit breaker while trying to install a Socket Rocket in my apartment. I was hoping that maybe the x10 modules would respond better to the software than to the remote (what the hell do I know?), but obviously they did not, so I checked out one of the x10 troubleshooting pages. It said something about opposite-phases-blah-blah-don't-you-know-I-got-Cs-in-my-EE-courses-because-I-couldn't-be-bothered-to-go...

I really had no idea what I was reading, but I made up this hypothesis that tripping the breaker had knocked things out of phase and that turning it off and on again would magically realign my outlets. Believe it or not, this actually seemed to work! That is, until I realized that once I plugged anything else into the outlet next to the one that had the x10 module plugged into it, the x10 module would stop working. This is a problem since I don't have too many options when it comes to outlets in my apartment, so I'm not sure how I'm going to resolve this. Currently, I have to choose between the possibility of controlling a lamp via my iPhone and plugging my laptop into the wall. It's a toss-up.

Since I've come this far, I'm strongly considering reworking the outlet situation in my apartment to accommodate my fetish for automated lights. If any of you circuit jerks out there can tell me how to do this without making a trip to the electrical section at Home Depot, I'd be much obliged. I only have 10 months until it's time to set up the AIM bot to control my Christmas lights again!*

*I actually did that in 2005 using the SDK and an existing AIM bot library written in C#. I went with the AIM bot because it seemed a lot simpler than running IIS. It was also more fun, though I guess nowadays I'd do it over Jabber instead. When was the last time I logged on to AIM?

Sunday, January 18, 2009

There have been times when I wanted to create a JavaScript widget that I could embed into any web page. One problem I ran into was that I would develop my widget in a standards mode page, but then it would not work when embedded on a page in quirksmode.

Either way, this whole doctype thing is a problem. On a web page, the document has a doctype property, but it's read-only, so trying to edit that is a dead-end. Originally, I tried manipulating the content of an iframe with src="about:blank" which I thought would be clean because it had no content, but no content means no doctype, so it is also stuck in quirksmode.

One option that does work is to have an empty html page with the strict doctype on the same domain as your JavaScript widget. That way, your widget can write its content into a local iframe using the empty html page as its source. The drawbacks to that approach are: (1) it complicates the API of your widget because it requires the consumer of your API to make a server-side change and bake the URL into calls to your API; and (2) you have to add some extra logic to listen for the iframe to load before you can write to it, so it forces your API to become asynchronous rather than synchronous.
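A rough sketch of that approach, to make the drawbacks concrete. The name blank.html stands in for the empty strict-doctype page you would have to host on your own domain, and createWidgetAsync is just my name for the wrapper; note how the onload handler forces a callback-based API:

```javascript
// Same-domain blank-page approach: blank.html is an empty page with a strict
// doctype, served from the widget's own domain (a hypothetical URL).
function createWidgetAsync(container, html, callback) {
  var iframe = document.createElement('iframe');
  iframe.src = '/blank.html';  // the server-side change the API consumer must make
  iframe.onload = function() {
    // Only once the frame has loaded is it safe to write into it,
    // which is why the API has to be asynchronous.
    var doc = iframe.contentWindow.document;
    doc.body.innerHTML = html;
    callback(iframe);
  };
  container.appendChild(iframe);
}
```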

The best solution I've seen so far is to dynamically create an iframe with no src attribute and to use document.write() to insert the content of the iframe or to set the src attribute to javascript:parent.functionThatReturnsTheDesiredContentAsHtml(). This test page demonstrates both techniques.
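Here is a minimal sketch of the src-less technique. The function name is mine, and I'm writing the short HTML5 doctype for brevity where the original would have used a strict HTML 4 one; either puts the frame into standards mode:

```javascript
// Create a src-less iframe and write standards-mode content into it,
// regardless of the rendering mode of the host page.
function makeWidgetFrame(container, html) {
  var iframe = document.createElement('iframe');
  container.appendChild(iframe);  // must be in the DOM before we can write to it
  var doc = iframe.contentWindow.document;
  doc.open();
  // The doctype written here is what gives the frame its own standards mode.
  doc.write('<!DOCTYPE html><html><body>' + html + '</body></html>');
  doc.close();
  return iframe;
}
```

The javascript:parent.functionThatReturnsTheDesiredContentAsHtml() variant described above achieves the same result without an explicit document.write() call.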

One thing that is particularly advantageous with this technique is that if your widget requires its own stylesheet, it can load it in the iframe instead of the top-level page where your CSS class names run the risk of conflicting with the CSS class names used in the host page. So even if you know that your widget is going to be used in a standards-compliant page, you may want to use this technique to create a "fresh namespace" for your CSS.
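For instance, the widget's stylesheet can go in the frame's own head, where its class names can't collide with the host page's. A sketch (widget.css and widgetShell are hypothetical names):

```javascript
// Build the HTML written into the frame, with the widget's stylesheet loaded
// inside the frame rather than in the host page. widget.css is a placeholder.
function widgetShell(bodyHtml) {
  return '<!DOCTYPE html><html><head>' +
         '<link rel="stylesheet" href="widget.css">' +
         '</head><body>' + bodyHtml + '</body></html>';
}
```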

The only disadvantage that I've found with iframing over inlining is with managing overlays that need to appear outside the iframe. In the test page, there is a green box that appears over the iframes and the main page. For it to be visible, it needs to be an element of the top-level page, which (1) may be in quirksmode and (2) will not honor the CSS rules defined in your iframe. That means that you may have to design your popup so that it works in both rendering modes and uses inline styles (exactly what we were trying to avoid with our universally-embeddable JavaScript widget!).

Also, the JavaScript code that manages such an overlay needs to be wary of its use of the document variable as it is important to use it in the correct context (iframe vs. host page). It is best to create different getters for the two documents and to use those exclusively, avoiding direct access to document altogether.
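Something along these lines, with the getter names being mine and widgetFrame assumed to be the iframe the widget created:

```javascript
// Keep the two documents behind explicit getters so overlay code never
// touches the bare `document` variable by accident.
function getHostDocument() {
  return document;  // the top-level page (possibly in quirks mode)
}

function getWidgetDocument(widgetFrame) {
  return widgetFrame.contentWindow.document;  // the standards-mode frame
}
```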

Overall, I am still pretty happy with this solution, though one thing that occurred to me is that this is a little frightening with respect to phishing in that a malicious web page could easily display an iframe to foreign content (that is familiar to you) and then display its own login box or credit card form on top of it to lure you to enter your information. I mentioned this to Mihai, and he said this is somewhat of a known issue, citing this example at zombieurl.com (warning: page contains sound). I guess that goes to show that you can never be safe from zombies, not even on the Internet.