Posts from February 2004

Okay, maybe Tantek’s right and the CSS I devised yesterday wasn’t the greatest (note to self: avoid writing journal entries at 4:45am). And yes, it would be more elegant, at least on the markup side, to use the href values to determine how to style links. It feels a touch clumsy, for some reason, maybe because the selectors end up being so long and I’m used to short selectors. Go check out what he has to say, along with his suggestions for better selectors, and while you’re at it take a look at substring selectors to get ideas for how to do even better. (I don’t think anyone supports *= yet, so you’re likely to have to use ^= instead.)
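To give an idea of what substring matching buys you, something like the following would style any link purely by where it points (the selector is just an illustration, not a rule I’m actually using):

```css
/* match any link whose href begins with the CC licenses path */
a[href^="http://creativecommons.org/licenses/"] {color: #360;}
```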

Back in high school, my best friend Dave and I devised a scenario where water shortages in the American southwest became so severe that states literally went to war with each other over water rights and access, fragmenting the United States in the process. It never really went much of anywhere, just an idea we kicked around, and that I thought about trying to turn into a hex-based strategic wargame but never did. It’s always lurked in the back of my head, though, the idea of climate-driven warfare.

According to Yahoo! News, a Pentagon report asserts that climate change is a major threat to national security; well, actually, to global security. And that if the global climate crosses a “tipping point,” the changes will be radical and swift. In such a situation, economic upheaval will be the least of our concerns—we’ll be more worried about adding to the climate shifts with the aftereffects of nuclear exchanges.

I actually read about this on Fortune.com a few weeks ago, and although now you have to be a member to read the full article at Fortune, there’s a copy at Independent Media TV. The Fortune article characterizes the report as presenting the possible scenarios if global climate shifts occur, but not claiming that they are happening or will happen. It also says that the Pentagon agreed to share the unclassified report with Fortune, whereas the Yahoo! News article says the report was leaked after attempts to hush it up. For that matter, the Yahoo! News article makes it sound like the report claims that The Netherlands will definitely be uninhabitable by 2007, and so on. According to the Fortune article, that was one aspect of a scenario, not a concrete prediction. This is probably due to the Yahoo! News article being a summary of an article in The Observer, which is a production of The Guardian and claims to be the “best daily newspaper on the world wide web.” Uh-huh.

So I guess I’m saying read the Fortune article, as it gives more information and takes a more balanced tone—not that it sounds any less disturbing, really. The fact that the report was commissioned at all suggests that the subject is being taken seriously at the Pentagon, which is not exactly a gathering place for leftist wackos. I’ll be very interested to see what reaction, official or otherwise, is triggered by this report in the weeks to come. My fear is that it doesn’t matter any more, that whatever accusatory words might get thrown around will just be insignificant noise lost in the rising wind.

If you thought XFN or VoteLinks were the last (or only) word on lightweight semantic link annotation, think again. Tantek writes about the idea of adding a license value to indicate a link that points to licensing terms. In his post, the expression of this idea is centered around Creative Commons (CC) licenses, but as he says, any license-link could be so annotated. Apparently the CC folks agree, because their license generator has been updated to include rel="license" in the markup it creates.
Accordingly, I’ve updated my CC license link for the Color Blender to carry rel="license", thus making it easier for a spider to auto-discover the licensing terms for the Color Blender.

Tantek also said this, regarding the idea of applying CSS that uniquely styles license-links:

I wonder who will be the first to post a user style sheet that demonstrates this.

Ooo, me, me! Well, not quite. I don’t have a complete user stylesheet for download, but here are some quick rules I devised to highlight license links. Add any of them to your user stylesheet, or you can use these as the basis for your own styles. (Sorry, but they won’t work in Internet Explorer, which doesn’t support attribute selectors.)
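For example, rules along these lines will call out any rel="license" link (the colors here are arbitrary; pick your own):

```css
/* give license links a distinctive border and hover effect */
*[rel~="license"] {border-bottom: 2px dotted #808;}
*[rel~="license"]:hover {background: #FDF;}

/* or append a small text marker using generated content */
*[rel~="license"]:after {content: " [license]"; color: #808;}
```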

Here’s my question: should the possible values be extended? Because I’d really like to be able to insert information based on what kind of license is being referenced. For example, suppose there were a c-commons value for rel; that way, authors could declare a link to be rel="c-commons license". Then we could use a rule like:

*[rel~="c-commons"]:before {content: url(c-commons.gif);}

…thus inserting a Creative Commons logo before any link that points to a CC license. At the moment, it’s highly likely that the only rel="license" links are going to point to CC licenses, but as we move forward I suspect that will be less and less true. I hope we’ll soon see some finer grains to this particular semantic extension.

If you don’t like using generated content for whatever reason, you could modify the rule to put the icon in the background instead, using a rule something like this:
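For example (license.gif here stands in for whatever icon you prefer):

```css
/* pad the link's left edge and slide the icon into the background */
*[rel~="license"] {
  padding-left: 18px;
  background: url(license.gif) no-repeat left center;
}
```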

The usual reason to avoid generated content is that IE doesn’t support it, but then IE doesn’t support attribute selectors either, as I mentioned. So don’t add any of these rules to an IE user stylesheet. Use Firefox, Safari, Opera, or one of the other currently-in-development browsers instead.

In other news, I was tickled pink (or maybe a dusky red) to see that for sol 34, one of the “wake-up” songs for the Spirit team was The Bobs’ Pounded on a Rock. My hat’s off to you, Dr. Adler! I’ve been listening to that particular album recently, mostly to relearn the lyrics. I’ve been singing to Carolyn when I feed her, and some favorites of ours are Plastic or Paper, Now I Am A Hippie Again, Corn Dogs, and of course Food To Rent. It’s awfully cute that she smiles at me when I sing to her, mostly because I know one day she’ll grow up, learn about things like “being on key,” and stop smiling when I sing.

There’s something about this picture that really works for me—there’s joy and hope and melancholy all wrapped up together, and that’s a mix I can rarely refuse. It’s available as a 16″ x 20″ poster from Cafépress, and I’m seriously considering making the purchase. If you like the image, or if you support the cause to which all proceeds will go, then get on over there and buy it!

Personally, I do support the cause benefiting from sales of the poster, which is to resist any attempt to amend the United States Constitution to ban same-sex marriages. I primarily support that cause because in my view, there’s no good reason why the subject of who can or can’t be married should be a part of the Constitution, amended or otherwise. I mean, if we’re going to start amending the Constitution to prohibit behaviors we don’t like, then when do I get my amendments banning civilian ownership of vehicles that get less than 30mpg on the highway, poorly formed HTML markup, and televangelists? And if those seem silly, how come my dislikes are less worthy of being Constitutionally enshrined than somebody else’s?

Beyond that, I’m generally supportive of what’s happening in San Francisco, at least in a general sense—I’m not sufficiently informed about the specific legal situation in California to have an opinion about the legalities, but the fundamental purpose is A-OK with me. Because as longtime readers (all four of you) can probably guess, I see no reason why homosexual couples should have any less ability to marry than heterosexual couples. I once was friendly with a couple who had been together twelve years, wore marriage bands, and had thrown a ceremony in which they exchanged the bands. The works, pretty much. Yet they couldn’t get married, legally speaking. They were a far better example of a loving pair than a lot of hetero couples I’ve known, and yet they could never be spouses. You might be wondering… were they male or female? It doesn’t matter. Which is, I think, sort of my point. It isn’t original, but I thought it was worth repeating.

Especially since we now have a new federal appeals judge in place, one who said that homosexual acts are comparable to “prostitution, adultery, necrophilia, bestiality, possession of child pornography and even incest and pedophilia.” I’m sorry, but if you can’t perceive a difference between activities engaged in by consenting adults and, say, an action perpetrated by a person upon a corpse or an animal, then you aren’t intellectually qualified to sweep the floor of the federal appeals court, let alone sit on it.

Deep breath. Move on.

I guess Saturday was a day for talking about aggregator experiences; in a post made that day, Meryl put forth a different perspective on the topic than I did, at about the same time. I agree with Meryl that an aggregator that can present a styled article should provide the option of disabling that behavior, and just delivering the text content. I just suspect that she and I would have different settings for that preference.

A relatively recent addition to the XFN What’s Out There? page is the XFN Dumper favelet, which lists all the XFN-enabled links in a page along with their XFN values. I decided that I wanted a different presentation and a little more information, so I hacked up ben’s XFN Dumper v0.2 and came up with XFN Dumper v0.21, which is currently in beta due to its problems running in both kinds of Internet Explorer. If you’d like to try it out anyway, you can find it on my new XFN Tools page. Once it exits beta I’ll move it over to the GMPG site.

I’ve spent the last two weeks (minus repair time, of course) running NetNewsWire Lite, and I’ve discovered that it’s addictive in exactly the wrong way: hard to give up, even though I really want to do so. This is no reflection on the program itself, which is excellent. The problem I have is with the fundamental experience.

Allow me to explain. In order to visit all my favorite weblogs/journals/whatever, I had a collection of home page URLs in a group in my favorites toolbar. That way I could open it up and go straight to a site, or else command-click on the folder to open them all up in tabs. The whole group would open up, each site to its own tab, and then I could close each tab as I read what was new, or else determined that there wasn’t anything new since the last time I dropped by.

Now, of course, I have an RSS aggregator that tells me when something new has appeared on a site. Thanks to NetNewsWire, I’ve become much more efficient about keeping up with all the weblogs I read. I’m also losing touch with the sites themselves, and by extension, with the people behind those sites.

What I’ve come to realize is that half the fun of visiting all those sites was seeing them: enjoying the design and experience that each author went to the effort of creating—the personality of each site, if you will. Sure, I’ve seen The Daily Report a zillion times; who hasn’t? I still got a bit of an emotional boost from dropping by and feeling the orange, even if Jeffrey hadn’t written anything new. The same goes for mezzoblue, and stopdesign, and all the others. Maybe it’s the same impulse that makes me play a record I’ve always liked, or re-read a favorite book for the twentieth time. It doesn’t matter. Part of my connection to the people behind the sites seems to be bound up in actually going there. Using an aggregator interrupts that; it lessens the sense of connection. It distances me from the people I like and respect.

And yet, thanks to that same aggregator, I can keep up with all those weblogs and half again as many news feeds in one tidy package. The latest Slashdot Science and Apple news, xlab OS X, the W3C, and more feeds come pouring in. I don’t have any connection with those sites, so that doesn’t bother me; in the case of Slashdot, I actually prefer getting the feeds because it means I can visit the referenced sites without subjecting myself to the comments.

The obvious solution is to strike a balance: to use the aggregator for news, and go back to my tab group to read personal sites. I’m going to give it a whirl, although the raw efficiency of the aggregator is so compelling that I feel a deep reluctance to unsubscribe from the personal-site feeds.

That’s what I mean by the experience being addictive in exactly the wrong way.

I suspect that what I may do is keep all the feeds, but when any personal site is updated, I’ll go visit them all by command-clicking the bookmark group. That way I’ll catch up with the folks who have something new for me to read, and at the same time visit everyone else—just to say, if only to myself, “You’re still there, and I’m still dropping by to see you, and that’s how it’s supposed to work.”

I expressed a faint hope yesterday that the spam problem would be solved, and wouldn’t you know it, a proposal along those lines popped up in my RSS feeds. A couple of researchers have published a paper describing a way to use social networks as an anti-spam tool. In brief, the idea is to build e-mail cluster maps. As reported in Nature, the researchers:

…decided to tackle the problem by taking advantage of the fact that most people’s e-mail comes from a limited social network, and these networks tend to be clustered into clumps where everyone knows each other.

If I understand the concept correctly, those of you who decide, on a whim, to e-mail me with a question about CSS or to comment on something I’ve written would never get through if I were using such a system; if you’re nowhere near my cluster, I don’t see how your message would be accepted. And if the only criterion for being assumed to be a ‘real person’ is that you’re sending from a cluster, then all spammers would have to do is form their own clusters.

I’m quite interested in social networks these days, but I’m not sure that spam is one of the problems a social network can really fix. At this point, I’m coming to believe that e-mail delivery fees are the only possible solution, and even then I have grave doubts that it would work. I’ve seen this idea described a few times, and here’s how it generally works.

Everyone gets to set a cost for accepting mail. I could say, for example, any message has to have a 10-cent delivery fee paid for me to even accept it. You might set the threshold at five cents, or 50 cents.

When sending a message, you authorize up to a certain amount to be paid for delivery. I might say that I’ll attach three cents to every outgoing message. For any account with that delivery fee (or lower), the message will reach the inbox, and I’ll be charged three cents. For any account with a higher delivery fee, the message is bounced back with a “needs more money to get through” error.

Anyone can choose to refund the delivery fee, either one at a time or by creating a “free entry” whitelist. So I might set my delivery fee at 50 cents, but permanently give my friends a free pass into the Inbox. For random correspondents with legitimate inquiries, I could give their delivery fee back. For spam, I could read it and collect the delivery fee.
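Stripped to its essentials, the accept-or-bounce decision is a simple comparison. Here’s a quick sketch of the idea in Python, purely illustrative; no such system exists, and the function name is mine:

```python
def deliver(attached_fee, recipient_fee, whitelisted=False):
    """Decide the fate of an incoming message under a delivery-fee
    system: accept it free (whitelist), accept it and charge the
    sender, or bounce it for insufficient postage."""
    if whitelisted:
        return ("accepted", 0)  # free pass into the Inbox
    if attached_fee >= recipient_fee:
        return ("accepted", attached_fee)  # sender pays what they attached
    return ("bounced", 0)  # "needs more money to get through"

# I attach three cents; a three-cent account accepts the message...
print(deliver(3, 3))                     # ('accepted', 3)
# ...a fifty-cent account bounces it...
print(deliver(3, 50))                    # ('bounced', 0)
# ...unless I'm on that account's free-entry whitelist.
print(deliver(3, 50, whitelisted=True))  # ('accepted', 0)
```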

It sounds great, and the technology could probably be created without much difficulty. The general idea is that if you don’t want to see spam, you reject all messages with too low a delivery fee; if you want to stick it to the spammers, you read their messages and collect their money. I still see a few problems with the idea.

If a spammer manages to fool my system into thinking the spam is coming from a friend, it gets in for free. If the mask is good enough, I never get a chance to collect the fee.

Who’s going to volunteer to run the micropayment system that would have to underpin the whole setup? And if there are no volunteers, then where’s the business model that would be needed to get a company to do it?

How does one keep the spammers from hacking, bypassing, or otherwise fooling the micropayment system, and wouldn’t any effective techniques to do so work just as well for the current mail system?

Assuming there is a micropayment structure in place, what’s to keep large ISPs from charging everyone a cent to pass a message through their servers—thus making e-mail no longer free for anybody?

Maybe that last point would be an acceptable price to pay for ending spam. It would pretty much kill off listservs, though, and that would sadden me quite a bit. Even at a penny per message, every post to css-discuss would cost $35.67 to deliver to all the subscribers (as of this writing). On average, we get about 50 posts per day, so that’s $1,783.50 daily, or $650,977.50 annually. If I had that kind of money, I’m pretty sure that I wouldn’t spend it on a mailing list. I’d buy a Navy fighter-bomber instead.
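For anyone who wants to check my math, here’s the arithmetic; the subscriber count of 3,567 is implied by the $35.67 figure at a penny per message:

```python
subscribers = 3567      # implied by $35.67 at one cent per copy
fee = 0.01              # one cent per delivered message
posts_per_day = 50

per_post = subscribers * fee
per_day = per_post * posts_per_day
per_year = per_day * 365

print(f"${per_post:.2f} per post")    # $35.67
print(f"${per_day:,.2f} per day")     # $1,783.50
print(f"${per_year:,.2f} per year")   # $650,977.50
```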

On the other hand, with a delivery-fee e-mail system in place, I could very easily set up an account where people could send their CSS questions for a delivery fee of, say, $29.95. If I accepted delivery, that would get you a detailed answer to your question, or else a refund of the delivery fee if I didn’t answer (or there was no answer to be had). So that would be kind of cool. I suppose I could approximate the general idea today using PayPal or some such, but that would mean going to the effort of setting it up, which isn’t something I’m likely to do given that I have no evidence that there’s any real demand for its existence. Actually, the presence of css-discuss pretty much says that there isn’t, since it’s a whole community of people providing help for free.

So I got started on all that because of the idea of using social networks for spam countermeasures, and like I said I’m interested in social networking these days. In that vein, I was rather amused to see myself at #2 and #4 on rubhub’s new Top 10 lists, and not in the least bit surprised to find Zeldman sitting atop both lists.

I was also quite fascinated by Jonas’ ruminations on how XFN, VoteLinks, and related technologies can easily form the basis for rudimentary trust networks. Jonas, a sociologist by training, has been writing some very interesting things about the semantic web and social networking recently, which is why I’ve just added him to my blogroll and RSS aggregator. As he points out, combining XFN and VoteLinks would be a snap, and has the potential to enrich the semantics of the Web. Instead of just counting links to a page, a community assessment of that page could be tallied.

What interests me even more is the next step. What else can be done with link relationships, and how will the pieces fit together? How many small, modular metadata profiles would it take to begin semanticizing the Web? I suspect not too many. This seems like a clear case of emergent properties just waiting to happen, where every incremental addition dramatically increases the complexity of the whole. John Lennon once said that life is what happens while you’re making other plans. Meaningful technological advancement seems to be what happens while committees are making other plans. It could very well be that the Semantic Web will come to pass because the semantic web arose on the fringes and paved the way—that the latter will become the former, simply by force of evolution. That strikes me as rather poetic, since it means that the principles Tim Berners-Lee followed in creating and defining the Web would become the keys to where he wants to go next.

Okay, enough talking about computer repair; it’s time for another picture of Carolyn.
It’s one of the first good ones we have of her smiling, and this is, for her, a relatively understated smile. When she’s happy, she’ll let loose with grins so wide her eyes scrunch shut. She actually smiles quite often, but each one is of fairly short duration—and when she does smile, we’re too busy enjoying the sudden rush of dopamine and other neurochemical whatnots our brains start pumping out to grab the camera. It’s really, really hard not to smile back. Not that we’re resisting.

I keep meaning to create an actual picture gallery on her personal page, but other stuff keeps getting in the way. Heck, I haven’t even created an e-mail account for her, mostly on the grounds that it does her no good until she learns to type. I just hope that by the time she’s old enough to want an account, Carolyn won’t have to deal with the volume of spam we see every single day. I’m not holding my breath, though.

Love the haircut, myself. I think this may well be the first published picture for which she tries to hurt me, a decade or so from now. Sorry, sweetheart, but it was just too cute not to share.

When I praised Apple yesterday for their repair service, I didn’t realize just how much praise was due. I was so excited to get my laptop back with working hinges, I hadn’t looked closely at the rest of the exterior. As TiBook owners know, the finish has a tendency to scratch. I’m not sure why that is, although I’m sure a Google search could yield all manner of answers, but the upshot is that the back of the display panel had a few nicks and dings, including a small dimple that prompted someone to ask if the laptop had stopped a bullet for me.

Now it doesn’t. The unknown technician replaced not only the hinges, but also the whole panel backing… and maybe even the whole display panel, screen and all. Now the machine looks as sharp and smooth as the day I bought it.

Let me be clear: those scrapes had nothing to do with the hinge problem. They were the result of “normal wear and tear.” There was absolutely no obligation on Apple’s part to do anything about them, any more than it would be Dell’s responsibility to replace a plastic surface on a Windows laptop that had gotten a scratch after half a year of ownership. While fixing the major problem, the unknown technician noticed that there was something else that could be fixed, and just went ahead and did it. No fuss. It wasn’t even noted on my repair history. It was just done.

I’ve never been sorry to buy Apple products. Now I’m actually proud to be a customer.

As a postscript, I’d like to point out that mine is an older-model Powerbook. The new ones have a much more scratch-resistant surface, and a totally different hinge system. On the new ones, there’s a single large and sturdy hinge that runs most of the width of the machine, occupying about the same amount of space as the gap between my hinges. They have other improvements too, like a backlit keyboard and ports on the sides instead of in the back, and I wish I could have waited another two months to buy my laptop so I’d have one of the new ones. Nothing wrong with mine—the new ones are just cooler.

For those of you using an RSS aggregator, you’re probably going to see all of my entries turn up as new a few more times. I’m adjusting the way I produce the feeds to include an indication of the post length and the categories to which the post belongs as text at the end of the feed description. I may also modify it to include the first sentence of each paragraph instead of just the first sentence of the entire post.

Incidentally, a few of you have asked why I don’t provide the complete post content in my feeds. For me, it’s a bandwidth issue. I was looking over the access statistics for January, and was astonished to find that the two RSS feeds together were accessed over 189,000 times. The home page, by comparison, was hit over 53,000 times. The latter accounts for 9.3% of the outgoing bandwidth; the two feeds together add up to 1.54%. If I were to have the feeds contain full posts, that would increase RSS-feed bandwidth by an order of magnitude at least. It would also reduce the number of 304 (Not Modified) responses the server returns for the RSS files, because I do go back and correct spelling errors and such. The feeds don’t have to be updated when I do, but they would if I provided full post content.

I do have sympathy for those of you using aggregators like NetNewsWire (I’m using the Lite version, myself) and FeedDemon. I’d have more sympathy for LiveJournal users if the LJ server returned 304s, but it never does, forcing me to download the whole feed every time I ask for updates. So I did consider the syndication experience from the user’s point of view. I also have to consider the impact on the server, and frankly, given the way RSS is designed, the potential impact is just too high for me to move to full-content feeds.

Ordinarily, you’d think that an almost weeklong absence indicates a major project, or maybe an illness, or some other major life event. Not this time. This time it was a major computer hardware failure. Not a hard drive, nor a monitor, nor anything you might usually suspect. No, this was far more basic.

Not too long after I posted the previous entry, I was working on my TiBook in the living room. Kat asked me to get something—probably a milk blanket or a pacifier or something baby related—and so I put the laptop, still open, down on the ottoman.

There was a sharp cracking sound.

As it turned out, it had actually been two cracking sounds. Both hinges that connect the laptop’s display panel to the body had snapped clean away from the panel.
Longtime readers may recall I had a similar experience about this time last year while in Santa Fe, New Mexico. Apparently that wasn’t some bizarre and isolated incident. In both cases, I had let go of the laptop panel when it was an inch or two above a well-padded surface. In both cases, something had given way. Amazingly, in both cases the laptop screen continued to function. The extra problem with this latest breakage was that since both hinges had failed, there was nothing to hold up the screen.

Luckily, a few months back an Apple Store opened up in a new mall about five miles from my house. I’d been meaning to get up there and check it out; last Wednesday, I finally did. I would have preferred better circumstances, obviously. So I took my broken laptop to the Genius Bar. As I opened it up and laid it flatter than a TiBook should really ever be, a guy standing nearby said, “Gosh, I’ve always wished I could open my PowerBook up that far.”

“I can show you how,” I said with an arched eyebrow. He declined the offer.

So after looking over the whole machine and hearing my description of how it had happened, the Genius’ guess was that the hinges had been over-torqued. They had been rather stiff ever since I got the machine, actually; it was almost impossible to open the laptop with one hand. So Alan (the Genius) made some notes to the effect that it was a hardware failure, and not the result of abuse, and that it was a covered repair. I changed the administrator password so they could get to the desktop if need be, shut down the system, and then handed the machine over to be shipped to a repair center.

It arrived back at the Apple Store today. That’s five days to ship, repair, and return. It’s one more day than the last time, but there was a weekend involved. The new hinges are a lot smoother than the old ones, too. I’m once more impressed by the speed and service Apple provides. So thanks to Alan at the Genius Bar, to the unknown technician who repaired my poor baby’s spine, and to Apple for continuing to make me glad I’m a customer. Of course I’d rather the laptop had never had any problems, but there will always be problems. The mark of a good company is that they address those inevitable problems professionally and with a minimum of hassle for the customer. As far as I’m concerned, that describes Apple in full.