Tuesday, 27 February 2007

I think it's actually a really good article, with just one mistake where I think I confused the reporter --- we're not the second development team outside Japan, of course, but the second outside North America, Japan being the other. I'm pleasantly surprised because my past interactions with the media have sometimes been really awful, and computer news often gets mangled by journalists who don't seem to understand what they're writing about. But this one is great. Thanks Ulrika!

Monday, 26 February 2007

There's a lot of confusion about what faith is. A lot of people use it to mean simply "irrational belief", or even "counter-rational belief". This is confusing because I think it's not what Christians mean when they talk about faith. More specifically, it's not what the Bible means. For example, Abraham is described as a paragon of faith, even though God appeared to him and spoke to him several times. Abraham's faith wasn't grounded in ignorance or uncertainty. Hebrews 11 says:

Now faith is being sure of what we hope for and certain of what we do not see.

and goes on to list "heroes of faith", including Abraham:

By faith Abraham, when called to go to a place he would later receive as his inheritance, obeyed and went, even though he did not know where he was going. By faith he made his home in the promised land like a stranger in a foreign country; he lived in tents, as did Isaac and Jacob, who were heirs with him of the same promise. For he was looking forward to the city with foundations, whose architect and builder is God.

Then later

They did not receive the things promised; they only saw them and welcomed them from a distance.

What Abraham "hoped for" and "did not see" was not the existence of God himself, but the eventual fulfillment of God's promises to him. That's what his faith was about.

Why is that commendable? If Abraham had been some kind of Mesopotamian Vulcan, logic would dictate that after God appeared to him the first time, following God's instructions would be the only reasonable course of action forever after. But of course it doesn't work like that for anyone, because we're all weak. Everyone, Christian or not, has times when we clearly know what we should do but simply lose heart and capitulate. For Christians, this means that we simply don't trust God enough. We may intellectually be convinced that he exists, that he loves us, and all that good stuff, but we effectively decide in our hearts "I'd better take my pleasure now because I'm not sure obeying God is a net win".

Faith isn't just something we have in relation to God either. I'm firmly convinced that working on Firefox is the right thing for me to be doing, but sometimes I don't feel it, and I'm tempted to skive off and spend time on something else. I honestly profess that everyone will be happier in the long run if I am a good husband and father, but there's always a temptation to be a selfish beast, even in situations where I know that it's going to hurt me in the long run. I need faith in my original convictions in order to get me through those temptations, to live in light of that unseen reward, whatever it may be. This faith is more about trust and loyalty than "mere belief". Ironically, in this sense faith is particularly rational.

Thursday, 15 February 2007

Chris has been working on adding offline support to Zimbra using Firefox's nascent offline capabilities. The goal is to test our APIs in the context of a large, complex AJAX application, to make sure they work and are a good fit for the task. He's made considerable progress and just posted a nice demo of reading email offline in Zimbra.

Ironically I think it works so well that the demo is not very exciting. You go offline, everything continues to work --- ho hum, move along, nothing to see here. Boring perhaps, but it's exactly the right user experience.

Note that two of the pieces already have WHATWG specs, and the jar: URI scheme is already a de facto standard used in a variety of products. The only really new part is using <link rel="offline-resource"> to put resources in a persistent "offline cache" that won't accidentally evict your application. That is quite simple and once we're confident we have the right semantics for it, we'll definitely try to get it standardized somewhere. We're about pushing the Web forward, not just Firefox, and I hope other browsers support these APIs ASAP if they prove popular.

Putting these pieces together, Web application authors can add offline support to their applications in a very incremental way. The first thing you do is collect all static resources deployed to the Web server that you'll need offline and arrange for them to be loaded into the user's offline cache. For performance and consistency reasons, and also to minimize the number of tags required, your best bet is to roll them all into a single JAR file and make that the offline resource. (The "start page" also needs to link to itself as an offline-resource to ensure it will be available offline.) Now, when the user visits your page and signals that they want the application to be available offline, the resources will be automatically downloaded and stored. (The exact UI for this is yet to be determined; I've proposed having "bookmark the page" be that signal.) On each subsequent visit to the start page, Firefox will revalidate the cached resources and update them if they've changed on the server. This should be enough to ensure your application can at least get started while the user is offline.
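For concreteness, here's a sketch of what that markup might look like. The rel value is the proposed one described above; the file names, and the use of a relative jar: reference, are illustrative assumptions rather than a tested recipe:

```html
<!-- hypothetical start page: app.html -->
<head>
  <!-- all static resources rolled into one JAR, pinned in the offline cache -->
  <link rel="offline-resource" href="resources.jar">
  <!-- the start page must also list itself so it's available offline -->
  <link rel="offline-resource" href="app.html">
  <!-- individual files inside the JAR would then be addressed via jar: URIs,
       e.g. jar:http://example.com/resources.jar!/scripts/app.js -->
</head>
```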

Most Web apps need to talk to some server. When the browser is offline, they can't. It's up to the application developer to detect offline status using the provided APIs and decide how to behave while offline. Some functionality, such as instant messages, should just be disabled. In other cases, where the app would normally retrieve data from the server, it instead might grab some previously preloaded data. This would work well for email, calendar, CMS, and other applications. Of course the data has to be stored somewhere, and cookies won't cut it, which is where WHATWG client-side storage comes in (or another client-side storage solution, such as Flash's, if you prefer). Similarly, instead of sending data to the server, it will have to be queued and sent later when the user gets back online.
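A minimal sketch of the queue-and-replay idea, assuming only that the application can tell when it's online (e.g. via the browser's online/offline events). `createOutbox` and the wiring below are hypothetical application code, not a Firefox API:

```javascript
// Queue outgoing messages while offline; replay them when connectivity returns.
function createOutbox(send) {
  const pending = [];
  return {
    // Send immediately if online, otherwise queue for later.
    submit(message, online) {
      if (online) {
        send(message);
      } else {
        pending.push(message);
      }
    },
    // Replay everything queued, in order.
    flush() {
      while (pending.length > 0) {
        send(pending.shift());
      }
    },
    pendingCount() {
      return pending.length;
    }
  };
}

// In the browser, the wiring might look like (hypothetical):
//   const outbox = createOutbox(msg => postToServer(msg));
//   window.addEventListener('online', () => outbox.flush());
```

In a real application the pending queue would itself be persisted in client-side storage, so queued changes survive the browser being closed while offline.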

This approach to offline Web apps has some major advantages. The user isn't required to download and install anything. In fact, no trust decision is required; the application remains sandboxed exactly as if online. In fact, there's potentially no new UI required at all. The footprint requirements are small. The offline user experience is very smooth; the address in the URL bar remains the same, and the same bookmarks or shortcuts will access the application. Dynamic online/offline switching is supported. It's very incremental; there's no new tier you have to suddenly add, and no new programming model. AJAX applications tend to move logic from the server to the client and this fits right in with offline support.

Of course, this approach is also not perfect. The main issue that I see right now is that some server-based functionality, such as full-text search, may just not perform well in the browser. (However, there may be ways around that, such as uploading an index to the client, or doing the computation in a Java applet loaded from the offline cache.) I'm glad to see other approaches growing up, such as the Dojo Offline Toolkit; hopefully we'll have a range of solutions to fit all needs.

There are some things we're deliberately not doing. People have asked for richer client-side data models, such as SQL, or some sort of automatic synchronization. I think it's premature to expose APIs for that sort of thing (even though it would be easy for us to expose SQL, since we already embed SQLite). I'm skeptical that we can choose a synchronization model, or even a SQL dialect, that suits everyone. Exposing SQL to untrusted code could also have all kinds of difficult-to-foresee consequences. I think if we try to choose a solution too early, we'll choose something inappropriate for most applications and it will become dead weight that everyone works around. For now I think we need to watch and wait and see if application needs start to converge. Of course, JavaScript libraries are also an option.

One API that we are thinking about adding is support for script-controlled loading of resources into the offline cache. This needs to be designed in conjunction with a policy for deleting resources from the offline cache, something we haven't settled on yet.

I'm really excited about seeing this taking shape. Thanks to Chris and Dave Camp, who's doing the offline cache work on the Gecko side.

ATSUI seems to have a problem with bidi overrides and trailing whitespace. If I give it the string RLO space space PDF (namely, two spaces inside a "right to left override" Unicode control), it produces glyphs with the first space to the left of the second space. This is wrong.

The general problem seems to be that bidi overrides are ignored for any trailing whitespace in the layout. You can even see the problem with a left-to-right override, for example when the whitespace follows a Hebrew character.

So far I haven't found any instances of this bug affecting non-whitespace characters, but it's possible I just can't see such effects, not being able to read any RTL languages.

I'm able to work around this problem by detecting when glyphs aren't in the order I expect and processing them in the correct order, but it seems like a pretty major bug. Unfortunately there doesn't seem to be much information on the Web about using ATSUI with bidi overrides. Most of the hits for my searches are actually WebKit checkins. (WebKit seems to use the same trick I'm using with ATSUI and Pango to force all characters to follow a certain direction: insert an RLO/LRO header character and a trailing PDF character into the text before handing it to the text engine.)
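The wrapping trick itself is simple enough to show. Here's a sketch in JavaScript of the string manipulation involved (the actual browser code does this on internal text buffers before handing them to the text engine):

```javascript
// Force a uniform direction on a run of text using Unicode bidi overrides:
// U+202D (LRO) forces left-to-right, U+202E (RLO) forces right-to-left,
// and U+202C (PDF, "pop directional formatting") terminates the override.
function forceDirection(text, rightToLeft) {
  const override = rightToLeft ? '\u202E' : '\u202D';
  return override + text + '\u202C';
}
```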

Tuesday, 13 February 2007

I spent most of today wrangling ATSUI for the Mac textrun code. It turns out that the information I need --- glyph IDs, advances, and horizontal and vertical offsets --- is only properly obtainable by registering a "post-layout callback" and reading that information within that callback. And it's not clear exactly when that callback is invoked. Also, I still haven't figured out how to get ATSUI to tell me the "image rect" (a.k.a. "inked rect", a.k.a. "glyph extents") for all glyphs in a layout. That's a big problem; we don't want to call ATSUMeasureTextImage a bazillion times. Working with complex frameworks is always tricky...

Another thing that's come up is the realization that a list of positioned glyphs is not necessarily going to be enough to render complex text with all possible features. ATSUI, at least, can apply various transformations to glyphs when rendering them. We can actually handle this with our textrun abstraction, by stashing the original ATSUTextLayout in the textrun when necessary, but it's not clear how we'll be able to detect when that's necessary...

Another really important issue is when to use ATSUI and when to bypass it for faster APIs. On Linux we have the compile-time option to bypass Pango for 8-bit-only text, which is currently enabled. (We also have the option to bypass Pango for all text, but the results are disastrous in many cases.) My current plan is to wait until we have textruns and the new textframe running reasonably on all platforms and then measure the performance impact of bypassing ATSUI, Pango and Uniscribe. Right now it's hard to predict what that impact will be, since we use textruns in a really stupid way --- usually we're creating one textrun (and hence one ATSUTextLayout/PangoLayout/etc) per word. With the new textframe we create one per paragraph (in the absence of font changes), which is much much more reasonable.

It would be most desirable to always use the Pango/ATSUI/Uniscribe path, because they give higher quality results. It would also improve consistency; it's a problem if adding one character to the end of a string changes the rendering of the whole string. We'll see how it goes.

Thursday, 8 February 2007

There are a lot of people around the world perpetrating evil through their stupidity, selfishness, thoughtlessness and malice. But only rarely do we see someone almost single-handedly ruin an entire country without even ignorance as an excuse. Robert Mugabe's doing it. As I recall, Zimbabwe was once doing quite well by African standards, with functioning institutions and a decent economy. Now it's spiraling down into a hellhole quite efficiently thanks to Mr Mugabe ... who blames everything on a mysterious Western plot. It enrages me every time I think about it.

What's even more disturbing, in some ways, is that the South African government continues to befriend him, apparently due to old revolutionary loyalties. This doesn't bode well for South Africa itself.

The units patch has finally landed. It was a monumental effort, spanning two years from the posting of the first patch to landing. The original design was proposed (by me, as it happens) back in 2002. Congratulations to Eli for pushing this through to resolution.

This patch is a major cleanup of the way we work with length units in Gecko. The new design is very clean and gives us some important new capabilities. The design revolves around CSS pixels. Think of a CSS pixel as 1/96 of an inch, rounded to the nearest device pixel, assuming the user is looking at a typical desktop screen. It's an abstract measurement that we use to make sure Web sites using measurements in "px" look reasonable. In particular, if the user has a (say) 200dpi screen, we don't want to set one CSS pixel to be one device pixel (as we currently do), because Web pages will simply look too small.

The new design focuses on three clearly defined quantities:

The number of "application units" per CSS pixel. We make the "application units" we use for internal layout smaller than CSS pixels because we want to support subpixel positioning of elements. This value is device-independent and currently set to 60.

The number of device pixels per CSS pixel. This is device dependent and depends on the device DPI. For devices up to 144 DPI, the value is 1. For 144 up to 240 DPI, the value is 2, and so on, increasing by 1 every 96 DPI. This means that on a 144-240 DPI device, everything specified in CSS pixels is effectively scaled up by a factor of 2.

The number of device pixels per inch. This is just the device DPI. This value is used to convert CSS length-based measurements (e.g., CSS "in" units) to device pixels and then to application units.
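Putting the three quantities together, the arithmetic might be sketched like this. This is my reading of the scheme as described above; the function names and the exact rounding are illustrative, not Gecko's actual code:

```javascript
// Application units per CSS pixel: fixed and device-independent.
const APP_UNITS_PER_CSS_PIXEL = 60;

// Device pixels per CSS pixel: 1 up to 144 DPI, 2 from 144 up to 240 DPI,
// and so on, increasing by 1 every 96 DPI.
function devicePixelsPerCssPixel(dpi) {
  return Math.max(1, Math.floor((dpi + 48) / 96));
}

// Convert a physical length in inches to application units:
// inches -> device pixels (via DPI) -> CSS pixels -> application units.
function inchesToAppUnits(inches, dpi) {
  const devicePixels = inches * dpi;
  const cssPixels = devicePixels / devicePixelsPerCssPixel(dpi);
  return Math.round(cssPixels * APP_UNITS_PER_CSS_PIXEL);
}
```

On a 96 DPI screen a physical inch comes out to 96 CSS pixels, i.e. 5760 application units; on a 200 DPI screen the scale factor is 2, so everything specified in CSS pixels is drawn twice as large in device pixels.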

The net result is that if you have a high-DPI screen, trunk builds will scale everything up by a factor of two or more, including the Firefox UI. This is basically a good thing, except that for now there are some bugs that make it hard to use. If you can't bear it, you can use about:config to set the preference "layout.css.dpi" to something like 100 to turn the scaling off. (For a good time, you can use this preference to test the scaling feature if you only have a regular screen.)

Also, whatever your DPI, we now support much better matching of CSS physical length values to your actual device. If you draw a 1in x 1in box on the screen, it should really be a square inch if your system DPI is set correctly or you set the preference in Firefox. Up until now it has been a rather poor approximation.

Another benefit of this patch, by the way, is that scaling in printing and print preview has been reimplemented so that you can use any scale factor you like. Up until now we've only been able to scale at certain mysterious predefined ratios.

This is not the end of the story about scaling, however. User-controlled zooming into Web pages is a separate feature that we will implement for Firefox 3.

For people working with Gecko layout code, this means that the old pixels-to-twips (and back again) conversions have been replaced with new conversion functions in nsPresContext that convert between application units, CSS pixels and device pixels. Whenever you work with pixels, you need to know which sort of pixels you're dealing with. In lower-level code without access to an nsPresContext, you can get similar information from an nsIDeviceContext. Note that because the application-units-to-CSS-pixels ratio is device independent, you can access it via static methods of nsPresContext and nsIDeviceContext.

Tuesday, 6 February 2007

Now that Chris Double has decloaked, I'm pleased to be able to report that he's the first local developer (other than me, of course) to be contracted by the Mozilla Corporation. I'm really excited to see this happening. I've really enjoyed working with Chris so far. We're working out of a room of my house right now, but I have signed a short-term lease on an office for us in the city.

Chris is working on porting Zimbra to use our offline support --- both to get a compelling demo going, and also to test out our offline model. He's already found a few issues (which we've fixed).

I'm not sure how big or how fast we'll grow here in Auckland, but we definitely expect to get at least a little bigger shortly. It will partly depend on finding the right people --- or them finding me!

Monday, 5 February 2007

Baa Camp ended yesterday. It was a whole lot of fun and very educational. I'd definitely go again if I got the chance. Some random thoughts...

I gave two presentations. One was on time-travel debugging, and got some good comments. The other was on Firefox of course, discussing and demoing some of the new stuff in Gecko 1.9. Unfortunately my Macbook crashed early on so I didn't get to show most of the demos (boo! hiss!). Nevertheless the audience (mostly Web developers) seemed keen, especially about the offline application support. This was by far the most highly anticipated new Gecko feature.

I gave my presentations early in the camp ... I selfishly like to do this because it maximises the likelihood that people I talk to will already know who I am and what I'm doing, or actually seek me out to talk to me.

I had some good followup conversations with people. I particularly enjoyed talking to Lars, the Google Maps guy from Sydney, who had a number of wish-items and was very interested in offline support. (Contrary to some blog reports I did not promise to implement all of Lars' requests! :-) ) I also really enjoyed talking to some guys from Shift, a local Web design company. We had good discussions about CSS, typography and AJAX, especially about how they use these when building real sites. This information is very valuable to me --- for example, it has implications for the units changes, and I think they've convinced me to change our approach slightly. I don't have time to keep up with the wide world of the Web design community so I may want to go and talk to them from time to time; fortunately their office is close to where my office will be.

I went to Nigel Parker's XAML/WPF presentation to see what the Microsoft pitch looks like. As expected it is very impressive. We definitely have a lot of work to do if the Web is to keep pace. On the other hand, the relationship between WPF and the Web is a difficult issue for Microsoft both politically and technically. For example, Nigel showed a XAML RSS reader, which was very nice except that if you have HTML markup in your RSS enclosures, as a very large number of feeds do, it doesn't know what to do. Anyway, Nigel gave a good presentation. He can also really slog a cricket ball!

The session with David Cunliffe about broadband was very good. It was actually an amazing feeling to be in a small classroom with a senior Cabinet minister, a diverse and (politically) unscreened audience, and no handlers or bureaucrats in sight, having a frank discussion of the issues. David really understood the issues, too, and seemed to take on board the primary message people were pushing (namely, that local loop unbundling is good, but it's equally important to have good local peering arrangements, which fortunately should be a lot simpler to resolve than the unbundling). It makes me feel optimistic about politics in this country --- one of the advantages of being a small country, perhaps.

It was amazing how much of people's energy was focused on the Web. Maybe that reflects Nat Torkington's selections, but it was pleasing in any case. It was also pleasing to see how much mindshare Firefox has in this space --- almost all of it. A number of people were building Firefox extensions, and Web designers were telling me how they design in Firefox and then backport to IE7. It almost made me squirm; it felt over-the-top. Nevertheless we mustn't take anything for granted.

I ran into a number of people who assumed I want Firefox to rule the world, making statements like "you could crush Opera by ...". I tried hard to put the message out that as far as I'm concerned, Firefox is just a vehicle for making the Web better; we need market share to achieve that, but attacking other good browsers does not actually serve my goals. In fact, trying to do so could hurt those goals by wasting resources.

People were excited by the fact that Mozilla development is happening here in Auckland, and that it's not just me. I got to explain my story many times.

I was surprised by how parochial the people from outside Auckland were --- lots of half-serious joking about how much better Wellington is (or Christchurch, or ...). The funny thing is that every complaint they have about Auckland just makes me think about how much worse that problem is in New York!

Friday, 2 February 2007

This weekend is the "Baa Camp" (a.k.a. Kiwi Foo Camp) organized by O'Reilly via Nat Torkington. A number of interesting people are going, and I'm going too. I've had previous contact with several of the attendees: Asa Dotzler is going from Mozilla, Nigel Parker from Microsoft, David Cunliffe (NZ Minister of Information and Communication Technology, among other things), Ben Goodger, and Nat himself. I'm not sure what it'll be like. I have vague fears it will be a lot of hot air, but at the very least I ought to meet a lot of interesting people and have fun. (I used to be cripplingly shy in large groups, but over the years I've become a lot better at schmoozing. I think this is the most important thing I learned in grad school.)

I really ought to demo some Gecko 1.9 features but unfortunately none of my builds are all that exciting right at this moment. Also, it rather steals your thunder when anyone can download a trunk Firefox build and see what's going on. Well, I'll do my best to wow the crowd one way or another.

BTW lately I've been working on text, making progress towards getting our new textrun infrastructure working on Windows and Mac as well as Linux, and working on the new textframe code that sits above it. I've also been making progress on setting up the local Mozilla development office. That's something else I'll be able to talk about at Baa Camp.