Recently, there has been a lot of talk about the new Android version for tablets, named “Honeycomb”. A video was posted with a sneak peek of it.

The problem is that, to me, it already looks outdated. If you compare the look of the final Honeycomb with the pre-alpha of Meego, you’ll get the impression that the former is actually the older one (and remember, that is the pre-alpha; the final version, released last year, is a bit more polished).

The weird thing is that if you compare Android 2.2 running on a smartphone with Maemo running on the N900, Maemo is the one that looks outdated.

Personally, as a software developer, I can say that if Meego keeps the Maemo tradition of not hiding the hardware from the developer, you can expect some crazier things running on it.

On the other hand, since Meego is being directed by Nokia, I kinda expect that the life of Meego will be hell (based on my experience with multiple versions of Maemo).

In January this year, I wrote about why the iPad matters. There, I pointed out that a lot of changes would come to the digital world after it appeared.

Recently, the iPad was officially launched in Brazil. Now you don’t need to import it and pay huge taxes for it; you can go to a local shop and buy it, paying the huge taxes for it.

There is only one problem with it: All reviews that people post here about it are translations of American articles, saying how awesome the new iBook Store is, how now you don’t need to carry books around, how you can easily watch your favourite TV series on Hulu and get movies from Netflix, buy the soundtrack of the movie or the new album of your favourite artist on iTunes Store… In short, all the good things about having a slim notebook where you won’t type much.

The problem is: None of this is available here in Brazil. So, in the end, the iPad is nothing but a huge iPod Touch. And when you point that out, people get pissed.

I mentioned that on Twitter to someone who posted a translated article from IT World (I think; can’t really remember right now) which mentioned all those good services you can access but that are only available in very select places of the planet. Their answer? “The iPad is an awesome device and people that say it’s a huge iPod never used it or don’t like it ;)” (yes, smiley face and all.)

First of all, I have used it already. My aunt has one and I’m still trying to figure out how she uses it. I like the bigger virtual keyboard compared to the iPod Touch, and the huge screen to check websites, but that’s it: exactly what a bigger iPod Touch would do. Second, if you read my original post, yes, I do like the iPad because of what it means. So neither point was valid, to start with. But this guy had to defend how awesome the device was, didn’t he?

That’s when I pointed out that a small netbook would do the same, for much less money ’cause, in the end, all you have is internet access to read the local newspaper online. And any device with connectivity would suffice, including a recent iPod Touch (as long as you have WiFi around) or even an iPhone 3GS, which would do much more than the iPad for around the same price.

So no, it’s not that I don’t like the iPad or never used it. The problem is the tiny-minded people with money who don’t want to share their things with the world and put geographical barriers on a borderless technology. And while those barriers are still up, the iPad will be just a huge iPod Touch everywhere except the USA.

PS: Just one thing: I used the iTunes Store in Australia and, as a digital distribution system, it’s awesome. The problem is that you get crippled versions of most albums instead of the full thing. One example is the soundtrack of “Across the Universe”. I bought it from the iTunes Store Australia, only to find a few minutes later that the American version has 5 or 6 more tracks. So the barrier is still there.

One day after I posted about the lack of respect for OSS apps in the Twitter “ecosystem” (a word they seem to like a lot these days), they announced that the API request for trending topics will now return promoted tweets.

As I was reading this announcement, the paranoid hat fell on my head.

As I explained before, OAuth allows Twitter to simply cut the access of an application at their own will because, as they say, it may “harm the ecosystem”. The problem is that the line of what is “harmful” to the “ecosystem” was never fully explained and seems to be a moving target: Every day, someone will post on Twitter-Development-Talk about their application getting rejected for some unexplained reason and being pointed to the Terms of Use.

Now, since Twitter never really explained what the “ecosystem” is and can simply cut any application they want, what stops them from cutting the access of some application that removes promoted tweets, on the basis that promoted tweets are good for the “ecosystem”?

Twitter, the biggest microblogging tool around, decided to change their policy toward applications, and it’s making it hard for OSS developers to create applications that can be as good as the other applications.

First, let me explain what problem they are trying to solve, how they are trying to solve it, and how this will make the life of OSS developers harder.

How do things work today?

Today, applications can use Basic Auth, which sends your username and password to Twitter, which checks them and, on success, returns your messages and direct messages, posts your update and so on. The flaw in this is that someone could be “listening” to your communication and easily grab your username and password. Or your computer could get hacked and attackers could just retrieve the file with your password. And then, one day, you wake up and see some of your updates saying, for example, “Buy viagra” or “I liek cocks”; not good.

Solving the stolen password problem with OAuth

To solve the problem of someone stealing your password, Twitter decided to embrace OAuth for two reasons: First, you store an authorization token on your side and not your password, so if your computer gets hacked, the attackers still don’t have your password. Second, if an application misbehaves, you can remove its permission to post and you should be all good.

On top of that, for applications that are very, very naughty, Twitter can completely revoke the application’s access. Why? The logic behind it is that spammers don’t really care if their spammy applications are misbehaving, as long as they post spam all day. It also makes the spammers’ life harder by forcing them to create accounts manually (which they do already) and applications manually; otherwise, a group of fake accounts could suddenly stop working ’cause one single application was revoked.

And where is the problem?

Basically, to prevent someone from listening to your communication and using your authorization token, the application must have an identification and a secret token, which are used to sign the messages sent with the authorization token. So, even if your computer is hacked and they steal your authorization token, they still can’t use it ’cause they don’t have the application secret and, that way, can’t sign the messages as coming from that application.

So Twitter said to all developers today: “Never share your keys!”

And here lies the problem for open source developers: We were forced to choose between two options.

First, we could follow Twitter’s idea and not share the application keys with the application itself. For a user to be able to use the application, then, they would have to register the application themselves, under another name. For an experienced user, that may be ok, but for users that simply want to read new messages, going all the way through registering an application, knowing if it is a desktop or a browser app, providing some URL and so on is too damn complicated. Most users would simply forget about it, and think that their friend’s application, which is closed source, is way better.

Second, we ignore Twitter’s recommendation and distribute our application with our keys. In this case, we can either suffer from someone taking those keys and spamming Twitter, thus getting the application secret revoked and leaving our users without any access till we provide a new one; greatly reduce our users’ protection, ’cause their authorization tokens can be easily exploited in case their computers get hacked; or, simply, watch Twitter decide that since we are providing our keys publicly, that’s bad for the ecosystem (because of the two previous maybes) and revoke the application anyway.

In summary: Either we ship applications with a terrible user experience, or we bite the bullet and give our users an application with incredibly reduced security (or one that, one day, will simply stop working even if the community of users around it behaves nicely, just because someone took the keys and abused the system.)

Twitter came up with a solution for open source applications that, basically, mimics the application registration process: The application is marked by them as open source, and we get access to another URL which registers a new application with your application as a template, gets a new application secret and identification, and returns them to you, and you keep using those from then on. So, in case the secret is hacked, only one application is compromised and only one application is blocked. But that won’t be available on the day they kill Basic Auth. So there will be a gap where open source applications and their users will be completely vulnerable to attacks.

Personally, I hate this stance of theirs. With Mitter, I always aimed for a simple application that would be easy to use and secure, whenever possible. Their current position forces me to choose one in favour of the other.

Damn Google: When are you going to learn not to hide stuff from users? How about you stop being lazy and write a proper crash report tool and a proper update tool, instead of doing those things hidden from the user?

And yes, I’m tired of ksurl showing up out of nowhere and basically killing my slow connection at home, with Chrome not saying a damn word about it.

After unsuccessful attempts to make KDE or GNOME usable on my work computer, I decided to try XFCE again. It was exactly what I was looking for, but that’s not the point of this post. While messing with its configuration, I found the window border settings and noticed that some of them are, actually, copies of old window managers, which brought back memories of all the time I spent with Linux in the past.

Here are the ones I remembered (just keep in mind that it was a long time ago and it’s from a user perspective, so the timing may be off a bit.)

CoolClean

When GNOME appeared, it needed a window manager for itself. Enlightenment was the chosen one and it came with this theme as default. I can’t really remember if it was the default window manager for the very first GNOME release or if it was used before.

MicroGUI

Due to Enlightenment’s lack of options and general bad behaviour (although it had the most flexible theme engine at the time), GNOME decided to switch from it to Sawfish (Sawmill at the time, but they had to change the name due to copyright problems.)

Atlanta

Sawfish, although highly configurable, was not in line with the general philosophy of GNOME, and Havoc Pennington started working on a new window manager based on GTK, named “Metacity”. Atlanta was one of the themes it had in its first release.

Eazel

At some point, a new Linux company appeared to help improve GNOME in general. This company, named Eazel, was working on a new file manager for GNOME and a new theme for the window manager. The company went bankrupt a few months later, with an unfinished file manager that was picked up by the community and turned into Nautilus.

Curve

With GNOME finally becoming more popular and turning into the default DE for most distributions, each of them had to find a way to differentiate itself from the others. RedHat, still working with workstations at the time, created a theme named “Curve”. It was pure pain for everyone not using an RPM-based distribution, ’cause the only way to get the theme was to install the RPM itself or convert it to something else. Much later, RedHat would leave the workstation business, focus on servers and create Fedora to take care of their old market.

Gorilla

From the ashes of Eazel, a new company appeared, named Helix, later renamed to Ximian. Like the other companies, they created a theme to differentiate themselves from the others.

Edit: Jakub Steiner, the famous Jimmac, pointed out that Gorilla predates Metacity (a point that, honestly, I wasn’t sure about). Gorilla was originally written for Sawfish and later ported (badly, it seems) to Metacity. Jimmac also sent a link to a screenshot of the original Gorilla.

Industrial

Some time later, Ximian was bought by Novell and they created a new theme. Although their GTK theme wasn’t very appealing due to the lack of contrast between elements, the window manager theme was awesome. Unfortunately, the XFCE theme doesn’t follow the GTK theme colors, as the original Industrial did.

Edit: I just found the Industrial GTK theme. Its problem was not the low contrast: it was extremely flat, in a time when everyone was aiming for rounder layouts.

The IT industry is in turmoil over a change Apple made in their iPod/iPhone/iPad license:

Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

Basically, what they are saying is “you will use our SDK and that’s it!” I’m not going to expand on the point that about 90% of the people complaining about this change did not and never would write an app for the Apple store.

The good thing about all this is that Adobe thought it was a direct attack on their Flash platform (which I kinda don’t agree with, because I have my own conspiracy theories, but I can see their point) and decided to bash Apple. Apple (Steve Jobs, actually) decided to write a long response to Adobe. Yes, there are a lot of wrong points in it, and I’ll let you read Thom Holwerda’s article about that.

If there is a lot of bashing around, why do I think this whole mess is any good?

Well, first of all, Jobs is right about Flash: I’m tired of closing Firefox ’cause a Flash applet is burning my CPU just to show a small game of two guys trying to beat each other at eating bananas, or because, apparently, the runtime is still running, eating memory and making Firefox slow. Flash is not accelerated in any way on OS X or Linux, even though the technology has been around for years. And Jobs’ claims about Flash will (or, at least, I hope they will) force Adobe to produce a decent runtime for Flash very soon. The more Jobs bashes them, the better.

Second, we finally have a good discussion about the open platform of the future: the web. I can’t recall so many discussions about HTML 4.0 or XHTML 1.0 before this. And now we have a lot of people discussing the merits and weaknesses of HTML 5. “Can it do that?”, “Can it replace this?” and such will only improve the draft even further. The “can’t”s are actually the best part of this all: If the W3C keeps an eye on them, who knows what new features HTML 5.1 will have?

As a side note to the HTML 5 discussion, it seems that some companies are already aiming at products that will use HTML 5 features (Google seems to be pushing better features for HTML5-capable browsers, although the look and feel is still the same) and I expect that in a few months, some sites will display the dreaded “this page requires [browser X] or superior” that we saw in the 90s. But it will be for a good cause: old, bug-ridden browsers will not display things properly and people will be forced to drop them in favor of newer, better browsers. And not only that, but the hidden “you need that browser because we put in something that only that browser supports” will be replaced by “you need that browser because we put in something that only the new, open standard supports”.

Third, still part of the HTML 5 discussion, we have the h264 codec discussion (h264 being the codec used to transmit videos on the web in HTML 5.) Jobs’ position of pointing to h264 for the “open web” is just bringing more and more discussion about the patent-encumbered codec. The more Jobs hits this point, the more people will point out that h264 is not an open codec and that, sooner or later, some company may screw the whole internet because they got angry with someone and decided to revoke all licenses.

The whole Adobe vs Apple discussion is awesome for the open web, because both companies are pointing out exactly what’s wrong with the current situation.

So Apple announced yesterday their new product, the iPad. Some people call it a tablet, some people call it a big iPhone/iPod touch, some call it “balloon boy”…

But, in the end, it’s a game changer. Not directly, but it plants the seed to change a lot of stuff.

PDAs
If you had any hope PDAs would come back, well, forget it. Although most smart phones have PDA features, their small screens aren’t so good for most of the stuff the “real” PDAs do. The iPad’s big screen (compared to most smart phones), with its not-really-tiny keyboard (even being virtual), kills most of it.

Kindle
The Kindle seems to be the first target of the iPad, and Jobs even said the iPad wouldn’t exist if it wasn’t for the pioneering work from Amazon and that now they would “stand on their shoulders.” Well, at first look, it doesn’t seem like much of a challenge:

The Kindle screen offers higher resolution (824×1200 vs 768×1024) and has a better ppi (150 vs 132.) And let’s be honest, when you’re reading text, it doesn’t matter if the screen is gray scale or color; it’s black text over a white background.

So, why does the iPad affect the Kindle market? First of all, the iPad is not just an eBook reader: It also has a browser and an email client and, although the Kindle also has a browser, it’s fairly limited. So, when you count that you have a small device that can do more than just read books, it may be worth paying twice as much for it.

At the very heart of the situation, though, is the fact that Apple is selling books. Let’s be honest, the Kindle is nothing more than a vehicle for Amazon to sell books without worrying about the logistics of sending a bunch of paper sheets with ink on them to a person somewhere on the globe. The Apple iBook store will go head to head with Amazon on that and, after the 1984 fiasco, Amazon’s image is somewhat scratched. And let’s not forget that Apple managed to convince a bunch of corporate luddites that music can be sold without DRM (even after selling it with DRM for a long time; I know, I was there when they switched.)

Netbooks
Small form factor, can connect to most WiFi networks… Sounds a bit like a netbook, doesn’t it? Well, not at first glance. A netbook like the Dell Mini 10, which comes with 160GB of storage (10x more than the entry level iPad) and an 11.6″ screen (against a 9.7″ screen), may sound like an undisputed winner, especially when it costs $399 against the iPad’s $499. And when you think about what people do with netbooks, it’s mostly email, web and text editing. But when you add the latest Windows version, its price jumps to $520. And it can still go higher if you replace the bundled Microsoft Works with the latest Microsoft Office.

Apple redesigned their iWork suite to fit the small screen of the iPad. And they are offering each of the 3 applications (Pages [word processor], Numbers [spreadsheet] and Keynote [presentation]) for $9.90 each. So you can get a small office suite for about $30, which brings the iPad to around the same price as the Dell Mini (although you’ll have to deal with a virtual keyboard instead of a real one.)

And really, I don’t think the hard disk size actually matters that much. Most people that use a netbook for email, web and small editing really don’t go that deep into the 160GB (which is mostly used by the operating system itself.)

I’m not saying that the iPad is a clear winner, but it has a nice place in the netbook market.

Telephony
Wait, what? Telephony? What the hell!

Well, it’s one of the small gems hidden in the iPad. Together with the launch of the new device, Apple is releasing a new SDK, version 3.2. This version removes the restriction on VOIP applications.

Now think about it: You have a VOIP application that can run on your WiFi (and 3G) tablet and on your 3G phone (since the same OS runs on both the iPad and the iPhone/iPod touch.) This is big. For the price of data transfer, you can talk to anyone in the world, anywhere you are. Old telephone companies must shiver at the prospect of landlines being canceled ’cause people won’t need them anymore.

(Edit) MID
MIDs (Mobile Internet Devices) are an area where Nokia pushed a lot. The N900 is the latest of that line of devices, which started with the N770 and, as far as I know, it’s the most famous (and successful) line of MID devices so far. Again, the iPad goes head to head against them and, due to the screen size, I must say it’s almost a loss for Nokia.

On the other hand, if you remember that with every new series Nokia simply stops any support for the previous operating system (the N770 with Maemo 3 lost support when the N800 was launched, and now the N800 with Maemo 4 is out of support with the N900 and Maemo 5), it basically means Nokia shot itself pretty good in the foot. If only they cared about their older systems (the first iPhone can STILL get the new OS), they might have had a chance. But it’s too late.

So it’s all good?
No, not at all. The iPad, although (as I believe) a game changer in concept, is not that big in the real world.

First of all, there’s the lack of multitasking, which is, let’s be honest, a stupid move by Apple. It has the power to do so, but it doesn’t. It doesn’t make any sense. It’s like buying a Ferrari and driving around in second gear all the time. The only hope is that, at some point, Apple releases an OS that is capable of multitasking properly (if not, it will have to be jailbroken.)

Second, there’s the centralized model around the iTunes Store. As an old user of it, I thought it was really amazing that I could get music more easily than by pirating it. But it’s not all roses: I was living in Australia and the Australian store, although selling the soundtrack of “Across the Universe”, didn’t have the full version of some albums; most of them are complete (2 discs and all) only in the US store. And, worst of all, there is absolutely NO WAY of buying ANYTHING in Brazil. This is completely stupid. And you can believe some more stupidity may come, like not being able to buy some books in the original language due to your region (or worse, no books at all.)

Third, no Flash. Oh wait, that’s actually a good thing. ;)

(Edit) Fourth, the lack of ports. For everything you need to connect to the iPad, you’ll need a converter. A huge mistake here. Imagine if it came with a simple video output. BLAM! Install Keynote and you have a nice presentation tool to carry around!

Summary
I really believe the iPad is the start of a new generation of computing devices. I want my PADD, to walk around the Enterprise with things to show to the captain. But the centralized model Apple insists on pushing may do more harm than good (well, maybe not at their home.)

(Edit) In case you’re asking yourself “so, does he mean I should get one or not?”, the answer is “no”. I’d like to get one myself ’cause I’m a gadget guy (I walk around with a phone and an iPod touch, sometimes I carry my N800 with me, I have a Palm T|X in a box and a GPS thingy somewhere, and I just threw away one of the first iPaq models ’cause it was not working anymore), but I’m pretty sure I’d save the money to buy something else. At the same time, as it’s the first iteration of such a line of devices, I guess it’s better to let the people with huge piles of money buy it right now and wait for the next generations. Unless, of course, you have huge piles of money or are a gadget guy (with some money to spare.)

One of my friends likes to use the expression “balloon boy” for everything that gets a lot of attention but turns out to be a lot less interesting in the end.

Go is a new language created by Google that recently went open source and generated a lot of buzz in the interpipes.

As someone who has been working as a programmer for almost 20 years, has worked with almost a dozen languages and, on top of that, has a blog, I think I’m entitled to give my biased opinion about it.

One of the first things that put me off was the video pointing out that the language is faster. Or the compiler is. Honestly, claiming that you become more productive because your compiler is fast is utterly wrong. If you’re aiming for a new language and you want people to be productive with it, make it so it’s easier to write correct code the first time. If you need to keep compiling your code over and over again till it does the right thing, you should probably check if there isn’t some impairment in the language itself that prevents correct code from being written in the first place.

Which brings us to my second peeve about Go: The syntax, as presented in the tutorial. Syntax, in my opinion, is the biggest feature any programming language has to offer. If the syntax is straightforward and easy to understand, it makes it easier to have multiple developers working on the same code; if the language allows multiple “dialects” (or ways to write the same code), each developer may be inclined to use a different approach to write code that basically does the same thing, and you end up with a mess where most developers would rather rewrite the code than fix a bug or add a feature.

The first thing that caught my eye was the “import” statement, which in one example takes a name before the package and, in the second, a whole block. Why have two different ways (well, three, if you count that the name is probably optional; in the middle of the statement, no less!) to import other packages with the same command?

Variable declaration also feels weird. “a variable p of type string” takes longer to read than “a string p” (compare Go’s var p string = "" with the C-style char *p = "";). And it goes on. If you keep reading the statements in their long form (expanding them into natural English), all the commands start to feel awkward, adding unnecessary cruft to the code; things that could easily be dropped to let people type less.

The “object” interface seems derived from JavaScript, which is a bad idea, since JavaScript has absolutely no way to create objects in the proper sense. And, because object attributes and methods can be spread around instead of staying grouped together, like in C++ and Python, you can simply add methods far away from the type definition. Ok, it works a bit like duct-taping methods onto existing objects, but it can still make a mess: if you define two objects in one file and people decide to add methods at the end of the file, you end up with a bunch of methods about different objects spread all over your code, when you could “force” them to stay together.

So far, those were my first impressions of the language and, as you can see, they were not good ones. Focusing on compile speed instead of code correctness and ease seems out of place for current IT needs, and the language seems to pick some of the worst aspects of most languages around.