Software writers in the 1980s liked to talk about how object technology would be the silver bullet that allowed re-use and composition of software systems, moving programming from a cottage industry where everyone makes everything from scratch to a production-line enterprise where standard parts fit together to provide a base for valuable products. It wasn’t; the sharing-required software license was.

I feel that the author is using object-oriented software modeling as a straw man, but his point still stands: the critical enabler of modern software is not technical, it is political.

I would go even further and argue that the critical enabler of modern technology is not technical, it is political – intellectual property law is but one egregious example of how the political trumps the technical in terms of impact… Technology is essential, but though it may subvert a system, it does not overcome oppression on its own.

So political apathy, as shown by staggering voter abstention in the latest European elections, has immediate technological impact. Political involvement is not futile – it is actually required for technological progress… Get political!

I don’t recall any French politician at minister level so plainly taking sides with free software:

Free software is a crucial asset for our economy, in more than one way. First, it enables the struggle against technological dependence upon actors who own our everyday computing tools – it is therefore a true guarantee of digital sovereignty. Furthermore, as we see today and contrary to popular myth, free and open source software creates jobs. Original business models have been invented and they are important factors in productivity and competitiveness for both the private and public sectors, which can in this way better control their holdings and concentrate their efforts on their specific value additions. Finally, free software undermines rent-seeking behaviours adverse to innovation, and therefore aids the emergence of new economic champions.

Will these bold ideas instantly translate into action? No one expects magic – but with policy laid out so clearly, there is reason to believe that the French government is headed in the right direction.

Let’s take note of those good intentions, keep an eye on the actions that should follow, spread the word that free software is a crucial economic asset and vote for those who understand that!

Oh noes – I’m writing about a Google product, again. The omnipresence of the big G in my daily environment is becoming a bit excessive, so I’m stepping up my vigilance about not getting dependent on their services – though I don’t mind them knowing everything about me. In that light, acquiring another Android communicator may not seem logical, but I’m afraid that it is currently the choice of reason: I would have paid pretty much any price for a halfway decent MeeGo device, but Nokia’s open rejection of its own offspring is just too disgusting to collude with. The Openmoko GTA04 is tempting, but it is not yet available and I need a device right now.

Android does not quite mean I have to remain attached to the Google tit: thanks to CyanogenMod there is now an Android distribution free of Google applications – and it also offers a variety of features and enhancements… Free software is so sweet!

As a bonus, CyanogenMod is also free of the hardware manufacturer’s pseudo-improvements and the carrier’s dubious customizations – those people just can’t keep themselves from mucking with software… Please stick to manufacturing hardware and providing connectivity – those are hard enough to do right without meddling and pushing software that no one wants!

So when I went shopping for a new Android device after my one-year-old daughter disappeared my three-year-old HTC Magic, I made sure that the one I bought was compatible with CyanogenMod. I chose the Motorola Defy because it is water-resistant, somewhat rugged and quite cheap too. By the way, I bought it free of the access provider’s SIM lock – more expensive upfront, but the era of subsidized devices is drawing to an end and I’m going to enjoy the cheaper subscriptions.

On powering on the Defy, the first hurdle is getting past the mandatory Motoblur account creation – not only does Motorola insist on foisting its fat supplements on you, but it won’t let you access your device until you give it an email address… In case I was not already convinced that I wanted to get rid of this piece of trash, that was a nice reminder.

This Defy was saddled with some Android 2.2.2 firmware – I don’t remember the exact version. I first attempted to root it using Z4root, but found no success with that method. Then I tried SuperOneClick and it worked, after some fooling around to discover that USB debugging must not be enabled until after the Android device is connected to the PC – RTFM! There are many Android rooting methods – try them until you find the one that works for you: there is much variety in the Android ecosystem, so your mileage may vary.

Now that I have gained control over a piece of hardware that I bought and whose usage should therefore never have been restricted by its manufacturer in the first place, the next step is to put CyanogenMod on it. Long story short: I fumbled with transfers and Android boot loader functionalities that I don’t yet fully understand, so I failed and bricked my device. In the next installment of this adventure, I’m sure I’ll have a nice tale of success to tell you about – meanwhile this one will be a tale of recovery.

In this bricked state, the Motorola Defy shows a blank screen and a lit white diode on its front. The normal combination of the power and volume keys won’t bring up the boot loader’s menu on start. But thanks to Motorola’s hardware restrictions designed to keep the user from modifying the software, the user is also kept from shooting himself in the foot: the Defy is only semi-bricked and therefore recoverable. Saved by Motorola’s hardware restrictions… Every cloud has a silver lining. But had the device been completely open and friendly to alien software, I would not have had to hack at it in the first place, I would not have bricked it and there would have been no need for saving the day – so down with user-hostile hardware anyway!

After re-flashing with RSD Lite, I found that there is a Linux utility for flashing Motorola Android devices: sbf_flash – that would have saved me from borrowing my girlfriend’s Windows laptop… Though I would still have needed it for SuperOneClick – isn’t it strange that support tools for Android are Windows-dependent?

But first I have to successfully transfer the CyanogenMod image to my Android device’s flash memory… And that will be for another day.

If you need further information about hacking Android devices, great places are Droid Forums and the XDA-Developers forum – if you don’t go there directly, the results of your searches will send you there anyway.

I loathe Facebook and its repressive user-hostile policy that provides no value to the rest of the Web. But like that old IRC channel known to some of you, I keep an account there because some people I like & love are only there. I seldom go to Facebook unless some event, such as a comment on one of the posts that I push there through Pixelpipe, triggers a notification by mail. I would like to treat IRC the same way: keeping an IRC application open and connected is difficult when mobile or when using the stupid locked-down mandatory corporate Windows workstation, and I’m keen to eliminate that attention-hogging stream from my environment – especially when an average of two people post a dozen lines a day, most of which are greetings and mealtime notifications. But when a discussion flares up there, it is excellent discussion… And you never know when that will happen – so you need to keep an eye on the channel. Let’s delegate the watching to some automation!

So let me introduce to you my latest short script: bipIRCnickmailnotify.sh – it sends IRC log lines by mail when a specific string is mentioned by other users. Of course, in the present use case I set it up to watch for occurrences of my nickname, but I could have set it to watch for any other string. The IRC logging is done by the bip IRC proxy, which among other things keeps me permanently present on my IRC channels of choice and provides me with the full backlog whenever I join with a regular IRC client.

This Unix shell script also uses ‘since’ – a Unix utility similar to ‘tail’ that, unlike ‘tail’, only shows the lines appended since its last execution. I’m sure that ‘since’ will come in handy in the future!
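I have not reproduced the script’s source here, but the core idea fits in a few lines of shell. The sketch below is illustrative rather than the actual bipIRCnickmailnotify.sh: it emulates ‘since’ with a line-count state file, and the paths, function name and watch string are all assumptions of mine.

```shell
# Sketch of the idea behind bipIRCnickmailnotify.sh (illustrative, not the real script).
# notify_new_mentions LOG STATE WATCH
# Prints the lines appended to LOG since the last call that mention WATCH,
# emulating the 'since' utility with a line-count state file.
notify_new_mentions() {
    log=$1; state=$2; watch=$3
    seen=0
    [ -f "$state" ] && seen=$(cat "$state")
    # 'tail -n +K' starts output at line K: show only lines added since last run
    tail -n "+$((seen + 1))" "$log" | grep -i -- "$watch" || true
    # Remember how many lines we have already seen
    wc -l < "$log" | tr -d ' ' > "$state"
}
# In the real script, the output would be piped to something like:
#   mail -s "IRC mention" me@example.org   (address is hypothetical)
```

Run from cron, this yields exactly the “mail me when my nick comes up” behaviour described above.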

When I set up an Ubuntu host, I can’t help feeling like I’m installing some piece of proprietary software. Of course that is not the case: Ubuntu is (mostly) free software and, as controversial as Canonical‘s ambitions, inclusion of non-free software or commercial services may be, no one can deny its significant contributions to the advancement of free software – making it palatable to the desktop mass market not being the least… I’m thankful for all the free software converts who saw the light thanks to Ubuntu. But nevertheless, in spite of all the Ubuntu community outreach propaganda and the involvement of many volunteers, I’m not feeling the love.

It may just be that I have not myself taken the steps to contribute to Ubuntu – my own fault in a way. But as I have not contributed anything to Debian either, aside from supporting my fellow users, religiously reporting bugs and spreading the gospel, I still feel like I’m part of it. When I install Debian, I have a sense of using a system that I really own and control. It is not a matter of tools – Ubuntu is still essentially Debian and it features most of the tools I’m familiar with… So what is it? Is it an entirely subjective feeling with no basis in consensual reality?

Again, I’m pretty sure that Mark Shuttleworth means well and there is no denying his personal commitment, but the way the whole Canonical/Ubuntu apparatus communicates is arguably top-down enough to make some of us feel uneasy and prefer going elsewhere. This may be a side effect of trying hard to show the polished face of a heavily marketed product – and thus alienating a market segment from whose point of view the feel of a reassuringly corporate packaging is a turn-off rather than a selling point.

Surely there is more to it than the few feelings I’m attempting to express… But anyway – when I use Debian I feel like I’m going home.

And before you say I’m overly critical of Ubuntu, just wait until you hear my feelings about Android… Community – what community?

I stumbled upon Peter Hutterer’s “thoughts on Linux multitouch”, which gives a good overview of the challenges facing X.org et al. in developing multitouch for Linux. Among other things he explains why, in spite of end-user expectations to the contrary shaped by competing offerings, Linux multitouch is not yet available:

“Why is it taking us so long when there’s plenty of multitouch offerings out there already? The simple answer is: we are not working on the same problem.

If we look at commercial products that provide multitouch, Apple’s iPhones and iPads are often the first ones that come to mind. These provide multitouch but in a very restrictive setting: one multi-touch aware application running in full-screen. Doing this is surprisingly easy from a technical point of view; all you need is a new API that you write all new applications against. It is of course still hard to make it a good API and design good user interfaces for the new applications, but that is not a purely technical problem anymore. Apple’s products also provide multitouch in a new setting, an environment that’s closer to an appliance than a traditional desktop. They have a defined set of features, different form factors, and many of the user expectations we have on the traditional desktop do not exist. For example, hardly anyone expects Word or OpenOffice to run as-is on an iPhone.

The main problem we face with integrating multitouch support into the X server is the need to support the traditional desktop. Multitouch must work across multiple windowed applications, with some pointer emulation to be able to use legacy applications on a screen. I have yet to see a commercial solution that provides this; even the Microsoft Surface applications I’ve played with so far only emulate this within very restrictive settings.”

In summary, the reason why Linux multitouch lags behind some of its competitors is that it is a significantly more ambitious project with bigger challenges to overcome.

If computer reading is cheaper and more convenient, can free digital publishing lead to sales of the same data on a physical substrate? Free data on a physical substrate has market value if the substrate has value of its own or if the data has sentimental value. That is a potential axis of development for the traditional publishing industry: when nostalgia and habits are involved, the perceived value of the scarce physical substrate of digitally abundant data may actually increase. Of course, free data has value on its own – but, as readers of this blog certainly know, it involves a business model entirely different from that of physical items.

Identification of content producers, quality control, aggregation, packaging… This is what a traditional publisher does – and it is also what a Linux distribution does. Isn’t it ironic that the Free software world and the world of traditional publishing have had such a hard time understanding each other?

Some actors did catch the wave early on. In the mid-nineties, I remember that my first exposure to Free software took the form of a Walnut Creek CD-ROM – at the time there was a small publishing industry based on producing and distributing physical media filled with freely available packages, for those of us stuck behind tens-of-kilobytes-thin links in the Internet’s backwaters. And there were others before: since time immemorial, the Free software industry has understood that the market role of producing data on a physical substrate is distinct and independent from managing the data. As Glyn Moody remarked, it is only a matter of time before the media industry as a whole gets it.

“I think Compuserve as a business is going to change very radically,” said David Strom, a communications and networking consultant in Port Washington, N.Y. “It could be they’re going to become a pipe, an access provider to the Internet, rather than a content provider.”

But Compuserve, like other on-line services, says it will continue to find ways to differentiate its offerings from databases of similar information on the Internet, by providing better search tools, a more organized approach and better customer service.

Compuserve has just released a CD-ROM, to be updated bimonthly, that works with its consumer on-line service to add video clips and music to the service in a magazine-like format. In the first edition, for example, users can view a video clip from a Jimmy Buffett concert and then with a click of the mouse connect to the Compuserve on-line service where they can order the audio CD. All the on-line services are working to add multimedia.

“Compuserve has 15 years experience in organizing that data and making it easy for them to find it and grab it,” Mr. Hogan said. “It’s not just a user interface issue but how content is packaged.”

The history of Compuserve since then shows that it was never able to fully execute that vision. But it also shows how long it took for the idea of free data as the lifeblood of a multi-industry symbiotic organism to travel from visionaries to a mainstream business model.

The quality of OpenStreetMap‘s work speaks for itself, but it seems that we need to speak about it too – especially now that Google is attempting to appear as holding the moral high ground, using terms such as “citizen cartographer” that it robs of their meaning by conveniently forgetting to mention the license under which the contributed data is held. And in the eye of the public, the $50,000 UNICEF donation to the home country of the winner of the Map Maker Global Challenge lets Google appear as a charitable citizen.

We need to explain why this is a fraud, so that motivated aspiring cartographers are not tempted to give away their souls for free. I could understand them selling their contributions, but giving them to Google for free is a bit too much – we must tell them. I’m pretty sure that good geographic data available to anyone for free will do more for the least developed communities than a 50k USD grant.

“Kibera in Nairobi, Kenya, widely known as Africa’s largest slum, remains a blank spot on the map. Without basic knowledge of the geography and resources of Kibera it is impossible to have an informed discussion on how to improve the lives of residents. This November, young Kiberans create the first public digital map of their own community”.

And they did it with OpenStreetMap. To the million people living in this former terra incognita, who offer no prospect of profit to a major mapping provider, how much do you think it is worth to at last have a platform for services that require geographical information, without having to pay Google or remain within the limits of the uses permitted by its license?

I answered this piece at ReadWriteWeb, and I suggest that you keep an eye out for opportunities to answer this sort of propaganda against libre mapping.

But what are we excited about? In a nutshell: automatic vectorization for parallel execution of any known code graph with no data dependencies between iterations is what Larrabee is about. That means that in many cases, developers can take their existing code and get easy parallel execution for free.

Larrabee enables GPU-class performance on a fully general x86 CPU; most importantly, it does so in a way that is useful for a broad spectrum of applications and that is easy for developers to use. The key is that Larrabee instructions are “vector-complete.”

More precisely: Any loop written in a traditional programming language can be vectorized, to execute 16 iterations of the loop in parallel on Larrabee vector units, provided the loop body meets the following criteria:

Its call graph is statically known.

There are no data dependencies between iterations.
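To make the two criteria concrete, here is a C-style pseudocode sketch of my own (not taken from the Larrabee material) of a loop that qualifies and one that does not:

```
/* Vectorizable: the call graph is statically known and iteration i
   never reads a value produced by another iteration, so 16 iterations
   can safely execute at once on the vector units. */
for (i = 0; i < n; i++)
    out[i] = a[i] * b[i] + c[i];

/* Not vectorizable as-is: each iteration reads the result of the
   previous one – a loop-carried data dependency. */
for (i = 1; i < n; i++)
    out[i] = out[i - 1] + a[i];
```

The second loop can often be rewritten (a parallel prefix sum, for instance), but that is a transformation the compiler cannot apply blindly.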

Shading languages like HLSL are constrained so developers can only write code meeting those criteria, guaranteeing a GPU can always shade multiple pixels in parallel. But vectorization is a much more general technology, applicable to any such loops written in any language.

This works on Larrabee because every traditional programming element — arithmetic, loops, function calls, memory reads, memory writes — has a corresponding translation to Larrabee vector instructions running it on 16 data elements simultaneously. You have: integer and floating point vector arithmetic; scatter/gather for vectorized memory operations; and comparison, masking, and merging instructions for conditionals.

This wasn’t the case with MMX, SSE and AltiVec. They supported vector arithmetic, but could only read and write data from contiguous locations in memory, rather than random-access as Larrabee does. So SSE was only useful for operations on data that was naturally vector-like: RGBA colors, XYZW coordinates in 3D graphics, and so on. The Larrabee instructions are suitable for vectorizing any code meeting the conditions above, even when the code was not written to operate on vector-like quantities. This can benefit every type of application!

A vital component of this is Intel’s vectorizing C++ compiler. Developers hate having to write assembly language code, and even dislike writing C++ code using SSE intrinsics, because the programming style is awkward and time-consuming. Few developers can dedicate resources to that, whereas with Larrabee it is easy: the vectorization process can be made automatic and compatible with existing code.

With cores proliferating on more CPUs every day and an embarrassing number of applications not taking advantage of them, bringing easy parallel execution to the masses means a lot. That’s why I’m eager to see what Intel has in store for the future of Larrabee.

I still hate having to use Google Calendar and Google Contacts for synchronization. I hope that SyncML synchronization will appear in the future, make Android a better desktop citizen and provide more choice of end points. Meanwhile I use Google. With that out of the way, let’s move on to my impressions of Android itself.

I am grateful for features such as a decent web browser on a mobile device, for a working albeit half-baked packaging and distribution system, and for Google Maps, which I consider both a superlative application in its own right and the current killer albeit proprietary infrastructure for location-enabled applications. But the rigidly simple interface that forces behaviours upon its user feels like a straitjacket: the overbearing feeling when using Android is that its designers have decided that simplicity is to be preserved at all costs, regardless of what the user prefers.

Why can’t I select a smaller font for my list items? Would a parameter somewhere in a customization menu add too much complication? Why won’t you show me the raw configuration data? Is it absolutely necessary to arbitrarily limit the number of virtual desktops to three? From the point of view of a user who is just getting acquainted with such a powerful platform, those are puzzling questions.

I still don’t like Android’s logic, and moreover I still don’t quite understand it. Of course I manage to use the system, but after five months of daily use it still does not feel natural. Maybe it is just a skin-deep issue or maybe I am just not the target audience – but some features are definitely backwards – package management, for example. For starters, the “My Downloads” list is not ordered alphabetically nor in any apparently meaningful order. Then for each upgradeable package, one must first browse to the package, then manually trigger the upgrade, then acknowledge the upgraded package’s system privileges and finally clear the download notification and the update notification. Is this a joke? This almost matches the tediousness of upgrading Windows software – an impressive feat considering that the foundations of Android package management seem serious enough. Where is my APT?

Like any new user on a prosperous enough system, I am lost in choices – but that is an embarrassment of riches. Nevertheless, I wonder why basics such as a task manager are not installed by default. In classic Unix spirit, even the most basic system utilities are independent applications. But what is bearable and even satisfying on a system with a decent shell and package management with dependencies becomes torture when installing a package is so clumsy and upgrading it so tedious.

Tediousness in package management in particular, and in user interaction in general, makes taming the beast an experience in frustration. Installing a bunch of competing applications and testing them takes time and effort; experimenting is not the pleasure it normally is on a Linux system. The lack of decent text entry compounds the feeling. Clumsy text selection makes cut and paste a significant effort – something Palm made quick, easy and painless more than ten years ago. Not implementing pointer-driven selection – what were the developers thinking?

PIM integration has not progressed much. For a given contact, there is no way to look at a communications log that spans mail, SMS and telephony: each of them is its own separate universe. There is no way to have a list of meetings with a given contact or at given location.

But basic functionality has been omitted too. For example, when adding a phone number to an existing contact, search is disabled – you have to scroll all the way to the contact. There is no way to search the SMS archive, and SMS to multiple recipients is an exercise left to applications.

Palm OS may have been unstable, incapable of contemporary operating system features, offering only basic functionality and generally way past its shelf date. But in the mind of users, it remains the benchmark against which all PIM systems are judged. And to this day I still don’t see anything beating Palm OS on its home turf of PIM core features and basic usability.

Palm OS was a poster child for responsiveness, but on Android everything takes time – even after I have identified and killed the various errant applications that make it even slower. Actually, the system is very fast and capable of feats such as full-motion video that were far beyond the reach of Palm OS, but the interaction is spoilt by gratuitous use of animations for everything. Animations are useful for graphically hinting the novice user at what is going on – beyond that they are only a drag. So please let me disable animations, as I do on every desktop I use!

The choice of a virtual keyboard was my own mistake and I am now aware that I need a physical keyboard. After five months, I can now use the virtual keyboard with enough speed and precision for comfortable entry of a couple of sentences. But beyond that it is tiring and feels too clumsy for any meaningful work. This is a major problem for me – text entry is my daily bread and butter. I long for the Treo‘s keyboard or even the one on the Nokia E71 – they offered a great compromise between typing speed and compactness. And no multitouch on the soft keyboard means no keyboard shortcuts, which renders many console applications unusable – sorry, Emacs users.

The applications offering is still young and I cannot blame it for needing time to expand and mature. I also still need to familiarize myself with Android culture and develop the right habits to find my way instinctively and be more productive. After five months, we are getting there – one-handed navigation has been done right. But I still believe that a large part of the user interface conventions used on Android do not match the expectations of general computing.

It seems like everything has been meticulously designed to bury under a thick layer of Dalvik and Google plaster anything that could remind anyone of Unix. It is very frustrating to know that there is a Linux kernel under all that, and yet to suffer wading knee-deep in the marshes of toyland. The more I use Android and study it, the more I feel that Linux is a mere hardware abstraction layer and the POSIX world a distant memory. This is not the droid I’m looking for.

I wish to call into question a fundamental assumption that has been made about this effort, the assumption that has held up development for years: that multiple layout capability must exist before outline view can be useful.

This is holding up outline view because multiple layout capability (issue 81480) is a big effort which, in turn, requires refactoring Writer’s usage of the drawing layer (issue 100875), and the latter has some significant technical difficulties. It seems unlikely that these issues will be finished soon.

The logic behind this assumption is that switching views will take too long if multiple layouts are not possible and/or most users will need simultaneous viewing for outline view to be useful. I disagree with both these assertions.

1. Simultaneous viewing is not necessary. I have been using Word’s outline view extensively for years without simultaneous viewing. Even though it’s possible with split screens, it takes up screen real estate that I want to use otherwise.

2. It won’t take that long to switch layouts [..]

I, for one, would much rather have an outline view soon, one that takes a couple of seconds to switch, and which is available only as a single view, than wait the extra time it is going to take for the multiple-layout refactoring to be finished. That would be enough for me for a long time.

This is a case of “perfect” being the enemy of “good enough”. Let’s just have “good enough” for a while first.

Is his experience anecdotal, or do people really seldom or never use Microsoft Word’s outline view simultaneously with another view? Other users have chimed in, but “me too” contributions will soon become boring… So here is my attempt at helping quantify user expectations: this poll!

Of course, self-selection by passionate users and links from OpenOffice forums will certainly bias the sampling beyond any semblance of representativeness, but we’ll take that as better than nothing…

If you want to skip the making-of story, you can go straight to the laconica2IRC.pl script download. Or in case anyone is interested, here is the why and how…

Some of my best friends are die-hard IRC users who make a point of not touching anything remotely resembling a social networking web site, especially if anyone has ever hinted that it could be tagged as “Web 2.0” (whatever that means). As much as I enjoy hanging out with them in our favorite IRC channel, conversations there are sporadic. Most of the time, that clubhouse increasingly looks like an asynchronous forum for short updates posted infrequently on a synchronous medium… Did I just describe microblogging? Indeed it is a very similar use case, if not the same. And I don’t want to choose between talking to my close accomplices and opening up to the wider world. So I still want to hang out in IRC for a nice chat from time to time, but while I’m out broadcasting dents I want my paranoid autistic friends to get them too. To satisfy that need, I need my IRC voice to say my dents on the old boys’ channel.

The data source could have been an OpenMicroblogging endpoint, but being lazy I found a far easier solution: use Laconi.ca‘s Web feeds. That solution looked easier because there are already heaps of code out there for consuming Web feeds, and it was highly likely that I would find one I could bend into doing my bidding.

To talk on IRC, I had previously had the opportunity to peruse the Net::IRC library with great satisfaction – so it was an obvious choice. In addition, in spite of being quite incompetent with it, I appreciate Perl and I was looking for an excuse to hack something with it.

With knowledge of the input, the output and the technology I wanted to use, I could start implementing. Being lazy and incompetent, I of course turned to Google to provide me with reusable code that would spare me building the script from the ground up. My laziness was quickly rewarded when I found rssbot.pl by Peter Baudis, in the public domain. That script fetches an RSS feed and says the new items in an IRC channel. It was very close to what I wanted to do, and it had no exotic dependencies – only the Net::IRC library (alias libnet-irc-perl in Debian) and XML::RSS (alias libxml-rss-perl in Debian).
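For what it’s worth, on a Debian system those two dependencies (plus the Daemon supervisor mentioned later in this post) are one command away:

```
# Package names as given in the text; 'daemon' is the supervisor
# discussed further down.
sudo apt-get install libnet-irc-perl libxml-rss-perl daemon
```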

So I set upon hacking this script into the shape I wanted. I added IRC password authentication (courtesy of Net::IRC), I commented out a string sanitation loop which I did not understand and whose presence caused the script to malfunction, I pruned out the Laconi.ca user name and extraneous punctuation to have my IRC user “say” my own Identi.ca entries just as if I were typing them myself, and after a few months of testing I finally added an option for filtering @replies so that my IRC buddies are not annoyed by the noise of remote conversations.
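The pruning and filtering are simple in spirit. As a hedged illustration – in shell rather than the script’s Perl, and assuming a “nick: text” item format, which is my assumption rather than Laconi.ca’s documented one – this is roughly what the transformation does:

```shell
# Illustrative shell equivalent of the Perl hack (the "nick: text"
# format is an assumption): keep only items from USER, strip the
# "USER: " prefix, and drop @replies to remote conversations.
format_dents() {
    user=$1
    sed -n "s/^$user: //p" | grep -v '^@' || true
}
```

Feeding it `user: hello world` yields just `hello world`, while `user: @bob …` is silently dropped.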

I wanted my own IRC user to “say” the output, and that part was very easy because I use Bip, an IRC proxy which supports multiple clients on one IRC server connection. This script was just going to be another client, and that is why I added password authentication. Bip is available in Debian and is very handy: I usually have an IRC client at home, one in the office, occasionally a CGI-IRC, rarely a mobile client and now this script – and to the dwellers of my favorite IRC channel there is no way to tell which one is talking. And whichever client I choose, I never miss anything, thanks to logging and replay on login. Screen with a command-line IRC client provides part of this functionality, but the zero-maintenance Bip does so much more and is so reliable that one has to wonder if my friends cling to Irssi and Screen out of sheer traditionalism.

All that remained to do was to launch the script in a sane way. Daemon is a good way to control this sort of simple, permanently running piece of code and keep it from misbehaving. Available in Debian, Daemon proved its worth when the RSS file went missing during an Identi.ca upgrade and the script, lacking exception catching, crashed every time it tried to access it. Had I simply put the script in an infinite loop, it would have hogged significant resources just by running in circles like a headless chicken. Daemon not only restarted it after each crash, but also killed it after a set number of retries within a set duration – thus preventing any interference with the rest of what runs on our server. Here is the Daemon launch command that I have used :
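(The exact command line has been lost from this copy; what follows is a sketch along the same lines, using options from daemon(1) in Debian’s daemon package – the script path, name and retry numbers are placeholders, not the original values.)

```shell
# Sketch of a supervised launch with daemon(1); values are placeholders:
#   --respawn            restart the script whenever it dies
#   --attempts/--delay   at most 5 respawns per 60-second burst
#   --limit              give up after 3 such bursts
daemon --name=identibot --respawn \
       --attempts=5 --delay=60 --limit=3 \
       -- /home/bot/identibot.pl
```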

And that’s it… Less cut and paste from Identi.ca to my favorite IRC channel, and my IRC friends who have not yet adopted microblogging don’t feel left out of my updates anymore. And I can still jump into IRC from time to time for a real time chat. I have the best of both worlds – what more could I ask ?

Chat is supposed to be realtime conversation – and it often is. But just as some corporate victims live in Outlook (that abortion that Microsoft shoves down users’ throats as an excuse for a mail client), some fellow geeks live with an IRC screen at hand. Those people use IRC for realtime conversation, but not only that. Soliloquy is widespread, and having a client with at least half a dozen tabs, each a parallel conversation, is a common occurrence. IRC users were microblogging before the term was coined and web interfaces imagined.

People come to IRC channels such as project channels to meet the whole group. But just as often they come to hang out with acquaintances, whom they find spread across various channels. Wouldn’t it be great if each user could have his own channel with just his friends ? This is what microblogging is : a people aggregator – just like a feed aggregator, but for the people you want to follow.

I have so far had a hard time trying to convince my IRC-addicted friends that we should use a Jabber MUC room in lieu of our usual IRC channel. Jabber MUC is superior to IRC in every possible way, but as much as we like to rail against the common user’s inertia toward technological adoption, we are sometimes no better.

I believe the problem is that Jabber MUC provides only a marginal incremental improvement over their current usage, while adopting a microblogging service is a huge stretch from their use cases. I have therefore long been dreaming about a chat interface to microblogging that would meld the social power of microblogging and common chat usage patterns into a workable migration path for my IRC-addicted friends. And there it is :

From a user’s point of view, Identichat is about joining the Jabber multi-user chat at your_identica_user_name@identichat.prosody.im : you immediately find yourself in a standard MUC room whose participants are your Identi.ca subscribers. The conversation is the microblogging stream that you would normally get at Identi.ca.

If you try to enter a notice, a help message in the chat window points out that ‘You can register using your identica account by sending !register username password’. Do that – not ‘/register’ as I mistakenly typed out of IRC habit – and you are all set to use Identi.ca like any chat tool.

Identichat will help Laconica by eroding chat users’ resistance to change. It could also foster new uses of microblogging, since a thick client enables considerably faster interaction than a web interface. For now there is room for improvement – the turnaround latency is perceptible compared to IRC or XMPP MUC, and a helpful “line too long” message would be better than “Send failed : error 406”. But I’m nitpicking : Identichat is a wonderful tool that gives a new face to the microblogging infrastructure. An infrastructure that can show different faces to different classes of users has a great future !

If I haven’t convinced you yet that Linux is going to take over the appliance world, I strongly suggest you look at Sony’s web site. There you can find a page full of television models going back to 2003, all of which run on Linux (for those essential moments when you must have the source code to your television, naturally).

Sony is such a big group that the right hand does not know what the left hand is doing – so I am not surprised that free software is being used, although Sony’s attitude toward it has had both highs and lows in the past.