Primarily the test was to find out how browsers handle repeating-linear-gradient(). Badly, as it turns out, at least for many of them. Chrome is the worst; far worse than Safari, which I found fascinating. So I wasn’t actually posting in search of a way around those problems, though in re-reading the post I can see where that impression might have developed. I was actually running an experiment where my starting hypothesis was that repeating gradients were safe to use. This was proven to be false. (Science!) Having found out that there are glitches and inconsistencies that are sensitive to element size, and seeing how wildly, almost hilariously wrong some engines have gotten this, I came to the conclusion that repeating-linear-gradient() isn’t yet ready for use.

That’s okay. Not everything in CSS is ready for use. Almost everything in CSS wasn’t ready for use, once upon a time. I think color is the one property that was probably stable from the outset, and even that had its quirks in Netscape 4, albeit in the handling of unknown values. (color: inherit resulted in a shade we lovingly referred to as “monkey-vomit green”.)

Now, as for useful alternatives to repeating-linear-gradient(): the most obvious (as in traditional, as in boring) method is to create a PNG or other pixel image and use the various background-related properties. For example, given a desire to have a 5-on-5-off pattern (as seen in test #5), you could create a 10×10 PNG and then tile it something like this:
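Something like this, where the class name and filename are placeholders of mine rather than anything from the original test:

```css
.stripes {
	background-image: url(stripes.png);  /* the 10×10, 5-on-5-off PNG */
	background-repeat: repeat-x;
	background-size: 10px 100%;  /* stretch the tile to the element's full height */
}
```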

The advantages here are that A) pixel images are about as safe as you get; and B) if you want to stretch the image vertically, you can do so without having to produce a new image.

A second alternative, only fractionally less safe but rather more efficient, is to replace the external PNG with a regular non-repeating linear gradient. I much prefer this to the suggestion of sizing and tiling a repeating gradient, because the test shows we can’t have any confidence in consistency with repeating gradients right now. (This is particularly true in Chrome, which is the worst with small repeated gradients.) Plain old non-repeating linear gradients, on the other hand, are predictable once you get the syntax right. Thus:
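A sketch of that approach, in unprefixed syntax and with the same placeholder names as before:

```css
.stripes {
	/* a hard stop at 5px gives the 5-on-5-off pattern within one 10px tile */
	background-image: linear-gradient(to right, blue 5px, transparent 5px);
	background-repeat: repeat-x;
	background-size: 10px 100%;
}
```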

…with, of course, the various vendor-prefixed versions of that first declaration blatted out by your favorite CSS preprocessor, framework, JavaScript shim, brain and fingers, or other tool of choice. The repeating pattern here is handled by the background-repeat declaration instead of relying on repeating-linear-gradient() to do it for you. It’s exactly the same as the first alternative except the pixel image has been replaced with a textually-described image.

So why do browsers not just do that internally? Well, I really don’t know, though it’s quite probable that performance plays an important role. For repetitions along the primary axes, they could certainly do it very easily. But the big advantage of repeating-linear-gradient(), and the place where both alternatives can fall flat on their faces unless you are very careful, is in creating repeating patterns that don’t march on one of the primary axes. Repeating a static linear gradient along the X axis is fine as long as it’s perfectly perpendicular to the X axis, but what happens when you want to repeat a pattern that’s tilted 30 degrees?
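For reference, the effect that question describes is a one-declaration affair with repeating gradients (unprefixed syntax here; the stop positions are illustrative):

```css
.tilted {
	/* 5px-on, 5px-off stripes tilted 30 degrees, no tiling math required */
	background-image: repeating-linear-gradient(30deg,
		blue 0, blue 5px, transparent 5px, transparent 10px);
}
```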

Yes, this sort of effect can certainly be worked out by hand or tool—after all, we’ve seen those kinds of patterns done with GIFs and PNGs and the like for decades now—but it’s a lot harder to do. It’s still what I’d recommend right now, because having a reliably repeated background is usually far better than one whose rendering you can’t predict.

The goal of repeating-linear-gradient() was to make it easy to specify a repeating pattern at any angle and not have to worry about the minutiae. Unfortunately, until rendering engines actually start properly handling the minutiae, we need to do it by hand if we do it at all.

My detailed studies of CSS3 have, of late, taken me into the realm of repeating linear gradients. I’m not going to attempt a Lovecraft parody for this post, though I seriously considered it. Instead, I’m going to give you a summary of where we stand with repeating linear gradients, followed by some details.

The summary is as follows: NOT READY FOR PRIME TIME. In fact, not even ready for safe harbor in Europe. Whether they will become ready for use is up to the various browser makers.

I came to this conclusion after creating and evolving a test case for pattern regularity. The test compares a repeated PNG (red) with two theoretically equivalent repeating-linear-gradient images (both blue). The two repeating linear gradients are expressed differently, but should yield the same result. The visual result of the test should be a perfect interleaving where the blue and red patterns overlap.

To see what I mean, load up the test in Opera; or take a look at this screenshot of the first eight cases, taken in Opera. Either one will show you the reference rendering for this test case. Regular repetition and seamless interleaving, no matter what you do with the browser window’s size. That was and is the intended result.

If you want to go all the way to the other end of the spectrum, load up the same test in Chrome or Canary. It will be…different, at least in my testing on OS X and Windows. For extra fun, try dragging the browser window to resize. AHHHHGGGH.

In Firefox 12/Aurora 14 and Safari 5.1.7, all on OS X, I see something very close to Opera, except there are little one-pixel missteps here and there. Even better, the positions of these missteps will be different between the two gradient patterns; that is, for a given test the missteps on the top test will almost certainly be different than those on the bottom—assuming there are any at all. And all that was about as vague as can be because the missteps depend on the width of the element; again, try drag-resizing the browser window. Crawling artifacts!

I’m sorry, I promised no Lovecraft.

I’ve been told that Firefox 12 for Windows is rock-steady and Opera-regular, but I haven’t yet been able to verify that. I also haven’t tried out IE10 to see where, if anywhere, they stand with this. I imagine every build of every Linux browser ever has this nailed 100%, because that’s all Linux users say every time I launch one of these tests. Go you!

The point of all this being, as if I needed to restate it: don’t depend on repeating-linear-gradient for much of anything right now. There is pretty clearly a metric massload of work to be done before we can start calling these safe to use.

I recently stumbled over a subtle interaction between cookie policies and localStorage in Firefox. Herewith, I document it for anyone who might run into the same problem (all four of you) as well as for you JS developers who are using, or thinking about using, locally stored data. Also, there’s a Bugzilla report, so either it’ll get fixed and then this won’t be a problem or else it will get resolved WONTFIX and I’ll have to figure out what to do next.

The basic problem is, every newfangled “try code out for yourself” site I hit is just failing in Firefox 11 and 12. Dabblet, for example, just returns a big blank page with the toolbar across the top, and none of the top-right buttons work except for the Help (“?”) button. And I write all that in the present tense because the problem still exists as I write this.

What’s happening is that any attempt to access localStorage, whether writing or reading, returns a security error. Here’s an anonymized example from Firefox’s error console:

When you go to line 666, you discover it refers to localStorage. Usually it’s a write attempt, but reading gets you the same error.

But here’s the thing: it only does this if your browser preferences are set so that, when it comes to accepting cookies, the “Keep until:” option is set to “ask me every time”. If you change that to either of the other two options, then localStorage can be written and read without incident. No security errors. Switch it back to “ask me every time”, and the security errors come back.

Just to cover all the bases regarding my configuration:

Firefox is not in Private Browsing mode.

dom.storage.default_quota is 5120.

dom.storage.enabled is true.

Also: yes, I have my cookie policy set that way on purpose. It might not work for you, but it definitely works for me. “Just change your cookie policy” is the new “use a different browser” (which is the new “get a better OS”) and it ain’t gonna fly here.

The user agent may throw a SecurityError exception instead of returning a Storage object if the request violates a policy decision (e.g. if the user agent is configured to not allow the page to persist data).

I haven’t configured anything to not persist data—quite the opposite—and my policy decision is not to refuse cookies, it’s to ask me about expiration times so I can decide how I want a given cookie handled. It seems to me that, given my current preferences, Firefox ought to ask me if I want to accept local storage of data whenever a script tries to write to localStorage. If that’s somehow impossible, then there should at least be a global preference for how I want to handle localStorage actions.

Of course, that’s all true only if localStorage data has expiration times. If it doesn’t, then I’ve already said I’ll accept cookies, even from third-party sites. I just want a say on their expiration times (or, if I choose, to deny the cookie through the dialog box; it’s an option). I’m not entirely clear on this, so if someone can point to hard information on whether localStorage does or doesn’t time out, that would be fantastic. I did see:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

…from the same section, which to me sounds like localStorage doesn’t have expiration times, but maybe there’s another bit I haven’t seen that casts a new light on things. As always, tender application of the Clue-by-Four of Enlightenment is welcome.

Okay, so the point of all this: if you’re getting localStorage failures in Firefox, check your cookie expiration policy. If that’s the problem, then at least you know how to fix it—or, as in my case, why you’ll continue to have localStorage problems for the next little while. Furthermore, if you’re writing JS that interacts with localStorage or a similar local-data technology, please make sure you’re looking for security exceptions and other errors, and planning appropriate fallbacks.
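One possible shape for that defensive handling, sketched as an assumption rather than taken from any particular library: wrap every localStorage access in try/catch and fall back to an in-memory store when a security error surfaces.

```javascript
// Sketch: a storage wrapper that survives the security error described
// above. "backing" would be window.localStorage in a browser; the fallback
// is a plain object that lives only as long as the page does.
function makeSafeStorage(backing) {
  var memory = {};
  return {
    set: function (key, value) {
      try {
        backing.setItem(key, String(value));
      } catch (e) {
        memory[key] = String(value);  // security error: keep it in memory
      }
    },
    get: function (key) {
      try {
        return backing.getItem(key);
      } catch (e) {
        return Object.prototype.hasOwnProperty.call(memory, key)
          ? memory[key]
          : null;
      }
    }
  };
}
```

In a page you’d call `makeSafeStorage(window.localStorage)` once and use the wrapper everywhere, so a surprise cookie policy degrades the experience instead of breaking it.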

This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type. I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming. I suppose I’ve been looking at web type for so many years, it looks normal to me now. (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies now.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values. Not span elements, kern elements. No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one. TypeButter basically invents a specific-purpose element.

I believe I understand the reasoning. Had they used span, they would’ve likely tripped over existing author styles that apply to span. Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way. Because the markup is script-generated, markup validation services don’t throw conniption fits. There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.
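A hypothetical sketch of the technique—not TypeButter’s actual code; the function name and spacing value are mine—just to make the mechanism concrete:

```javascript
// Sketch of the idea: emit a <kern> element -- an element no HTML spec
// defines -- whose only job is to carry an inline letter-spacing value
// for the character it wraps.
function kernMarkup(letter, spacingEm) {
  return '<kern style="letter-spacing:' + spacingEm + 'em">' + letter + '</kern>';
}
```

Calling `kernMarkup('W', -0.06)` yields a string like `<kern style="letter-spacing:-0.06em">W</kern>`, which browsers will happily render and style despite never having heard of the element.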

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns. The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern. Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused. But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof. It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s a downside I haven’t thought of yet. It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more. Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details. That throws a fairly large wrench into the gears, and requires further contemplation.

My hope is that the interview brings clarity to a situation that has suffered from a number of misconceptions. I do not necessarily hope that you agree with Tantek, nor for that matter do I hope you disagree. While I did press him on certain points, my goal for the interview was to provide him a chance to supply information, and insight into his position. If that job was done, then the reader can fairly evaluate the claims and plans presented. What conclusion they reach is, as ever, up to them.

We’ve learned a lot over the past 15-20 years, but I’m not convinced the lessons have settled in deeply enough. At any rate, there are interesting times ahead. If you care at all about the course we chart through them, be involved now. Discuss. Deliberate. Make your own case, or support someone else’s case if they’ve captured your thoughts. Debate with someone who has a different case to make. Don’t just sit back and assume everything will work out—for while things usually do work out, they don’t always work out for the best. Push for the best.

Right in the middle of AEA Atlanta—which was awesome, I really must say—there were two announcements that stand to invalidate (or at least greatly alter) portions of the talk I delivered. One, which I believe came out as I was on stage, was the publication of the latest draft of the CSS3 Positioned Layout Module. We’ll see if it triggers change or not; I haven’t read it yet.

The other was the publication of the minutes of the CSS Working Group meeting in Paris, where it was revealed that several vendors are about to support the -webkit- vendor prefix in their own very non-WebKit browsers. Thus, to pick but a single random example, Firefox would throw a drop shadow on a heading whose entire author CSS is h1 {-webkit-box-shadow: 2px 5px 3px gray;}.

As an author, it sounds good as long as you haven’t really thought about it very hard, or if perhaps you have a very weak sense of the history of web standards and browser development. It fits right in with the recurring question, “Why are we screwing around with prefixes when vendors should just implement properties completely correctly, or not at all?” Those idealized end-states always sound great, but years of evidence (and reams upon reams of bug-charting material) indicate it’s an unrealistic approach.

As a vendor, it may be the least bad choice available in an ever-competitive marketplace. After all, if there were a few million sites that you could render as intended if only the authors used your prefix instead of just one, which would you rather: embark on a protracted, massive awareness campaign that would probably be contradicted to death by people with their own axes to grind; or just support the damn prefix and move on with life?

The practical upshot is that browsers “supporting alien CSS vendor prefixes”, as Craig Grannell put it, seriously cripples the whole concept of vendor prefixes. It may well reduce them to outright pointlessness. I am on record as being a fan of vendor prefixes, and furthermore as someone who advocated for the formalization of prefixing as a part of the specification-approval process. Of course I still think I had good ideas, but those ideas are currently being sliced to death on the shoals of reality. Fingers can point all they like, but in the end what matters is what happened, not what should have happened if only we’d been a little smarter, a little more angelic, whatever.

I’ve seen a proposal that vendors agree to only support other prefixes in cases where they are un-prefixing their own support. To continue the previous example, that would mean that when Firefox starts supporting the bare box-shadow, they will also support -webkit-box-shadow (and, one presumes, -ms-box-shadow and -o-box-shadow and so on). That would mitigate the worst of the damage, and it’s probably worth trying. It could well buy us a few years.
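Meanwhile, the defensive authoring practice stays what it has always been: declare each vendor’s prefixed form alongside the unprefixed property, with the unprefixed one last so it wins once support is un-prefixed. Extending the earlier example:

```css
h1 {
	-webkit-box-shadow: 2px 5px 3px gray;
	-moz-box-shadow: 2px 5px 3px gray;
	box-shadow: 2px 5px 3px gray;
}
```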

Non-WebKit vendors are in a corner, and we helped put them there. If the proposed prefix change is going to be forestalled, we have to get them out. Doing that will take a lot of time and effort and awareness and, above all, widespread interest in doing the right thing.

Thus my fairly deep pessimism. I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless. It’s what we did at Netscape ten years ago, and almost certainly would have done despite any outcry. I don’t mean to denigrate or undermine any of the efforts I mentioned before—they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week. If nothing else, it will serve as evidence of your commitment to professional craftsmanship. The real question is: how many of your fellow developers come close to that level of commitment?

And I identify that as the real question because it’s the question vendors are asking—must ask—themselves, and the answer serves as the compass for their course.

Here’s an interesting little test case for transitions. Obviously you’ll need to visit it in a browser that supports CSS transitions as well as CSS 2D transforms. (I’m not aware of a browser that supports the latter without supporting the former, but your rendering may vary.)

In WebKit and Gecko, hovering the first div causes the span to animate a 270 degree rotation over one second, but when you unhover the div the span immediately snaps back to its starting position. In Opera 11, the span is instantly transformed when you hover and instantly restored to its starting position when you unhover.

In all three (WebKit, Gecko, and Opera), hovering the second div triggers a one-second 270-degree rotation of the span. Unhovering causes the rotation animation to be reversed; that is, a one-second minus-270-degree rotation—or, if you mouseout from the div before the animation finishes, a rotation from that angle back to the starting position. Either way, it’s completely consistent across browsers.

The difference is that in the first test case, both the transform and the transition are declared on the hover state, whereas in the second, the transition is declared on the unhovered state. Like this (edited for clarity):
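A sketch of the two cases, in today’s unprefixed syntax, with placeholder selectors of mine and the vendor-prefixed variants omitted:

```css
/* first case: transition declared on the hover state only */
#one span {
	transform: rotate(0deg);
}
#one:hover span {
	transform: rotate(270deg);
	transition: transform 1s;
}

/* second case: transition declared on the unhovered state */
#two span {
	transform: rotate(0deg);
	transition: transform 1s;
}
#two:hover span {
	transform: rotate(270deg);
}
```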

It’s an interesting set of results. Only the second case is consistently animated across the tested browsers, but the first case animates in only one direction in WebKit and Gecko. I’m not sure which, if any, of these results is more correct than the other. It could well be that they’re all correct, even if not consistent; or that they’re all wrong, just in different ways.

At any rate, the takeaway here is that you probably don’t want to apply your transition properties to the hover state of the thing you’re transitioning, but to the unhovered state instead. I say “probably” because maybe you like that it transitions on mouseover and instantly resets on mouseout. I don’t know that I’d rely on that behavior, though. It feels like the kind of thing that programmer action, or even spec changes, will take away.

I recently became re-acquainted with a ghost, and it looked very, very familiar. In the spring of 1995, just over a year into my first Web gig and still just over a year away from first encountering CSS, I wrote the following:

Writing to the Norm

No, not the fat guy on “Cheers.” Actually, it’s a fundamental issue every Web author needs to know about and appreciate.

Web browsers are written by different people. Each person has their own idea about how Web documents should look. Therefore, any given Web document will be displayed differently by different browsers. In fact, it will be displayed differently by different copies of the same browser, if the two copies have different preferences set.

Therefore, you need to keep this principle foremost in your mind at all times: you cannot guarantee that your document will appear to other people exactly as it does to you. In other words, don’t fall into the trap of obsessively re-writing a document just to get it to “fit on one screen,” or so a line of text is exactly “one screen wide.” This is as pointless as trying to take a picture that will always be one foot wide, no matter how big the projection screen. Changes in font, font size, window size, and so on will all invalidate your attempts.

On the other hand, you want to write documents which look acceptable to most people. How? Well, it’s almost an art form in itself, but my recommendation is that you assume that most people will set their browser to display text in a common font such as Times at a point size of somewhere between 10 and 15 points. While you shouldn’t spend your time trying to precisely engineer page arrangement, you also shouldn’t waste time worrying about how pages will look for someone whose display is set to 27-point Garamond.

That’s from “Chapter 1: Terms and Concepts” of Introduction to HTML, my first publication of note and the first of three tutorials dedicated to teaching HTML in a friendly, interactive manner. The tutorials were taken down a couple of years ago by their host organization, which made me a bit sad even though I understood why they didn’t want to maintain the pages (and deal with the support e-mail) any longer.

However, thanks to a colleague’s help and generosity I recently came into possession of copies of all three. I’m still pondering what to do about it. To put them back on the web would require a bit more work than just tossing them onto a server, and to make the quizzes fully functional would take yet more work, and after all this time some of the material is obsolete or even potentially misleading. Not to mention the page is laid out using a table (woo 1995!). On the other hand, they’d make an interesting historical document of sorts, a way to let you young whippersnappers know what it was like in the old days.

Reading through them, now sixteen years later, has been an interesting little trip down memory lane. What strikes me most, besides the fact that my younger self was a better writer than my current self, is how remarkably stable the Web’s fluidity has been over its lifetime. Yes, the absence of assuredly-repeatable layout is a core design principle, but it’s also the kind of thing that tends to get engineered away, particularly when designers and the public both get involved. Its persistence hints that it’s something valuable and even necessary. If I had to nominate one thing about the Web for the title of “Most Under-appreciated”, I think this would be it.