Speech recognition in the cloud has given companies like Apple and Google a reason (or excuse) to gather masses of training data. They have put it to good use: speech recognition is much better than it used to be. If you like speech recognition, use it, donating your data and helping the rest of us in the process. If you don't, don't use it. As long as users are aware of this, I don't really see the problem.

Really? Have you tried GNOME 3.4? I installed it two weeks ago (on Ubuntu), and I haven't looked back. Maybe you've been bitten by previous versions? It seems GNOME 3 is like KDE 4: they need a couple of versions to get it right. (And an additional couple of versions to win the mindshare back.)

I love how GNOME 3 gets out of your way while you're doing useful things, and how at the same time any context switch (starting a new application, or going to an existing window) is just one key press away. Even the colours are well thought-out: all context-switching stuff is in black, content is mostly bright.
The way GNOME 3 handles IM is really exemplary: messages turn up at the bottom of the screen (in black). You can ignore them, or mouse over and type a reply. As soon as you move your mouse away, the conversation disappears and you can go on with what you were doing.
Does anything else beat this in terms of productivity?

I'm surprised that there seems to be this Slashdot groupthink against new desktop environments on the basis that they are different from Windows or Gnome2. Surely most people posting here have an IQ that allows them to learn a new interface? And actually come up with meaningful arguments?

Disclaimer: I'm not a patent lawyer, but I do know a thing or two about speech recognition. I've only read the summaries of the patents, but they don't seem to cover anything that Siri or any other sensible speech recogniser system does.

Patent 7,266,496 (from 2007) is about a complete speech recogniser on a chip. This couldn't be further from Siri, which sends the audio data to the cloud to be recognised. The four "modules" that the patent covers are bog-standard. Patent 7,707,032 (from 2010) describes a silly way of doing speech recognition (by comparing with individual training samples) and is unrelated to any modern commercial speech recogniser.
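To make the "comparing with individual training samples" point concrete: here is a minimal sketch (my own illustration of template matching in general, not the patented method) of the kind of recogniser that stores one sample per word and picks the nearest one with dynamic time warping. Modern systems use statistical models trained on thousands of hours instead.

```python
# Template-matching recognition: compare the input feature sequence
# against one stored training sample per word using dynamic time
# warping (DTW), and return the label of the nearest template.

def dtw_distance(a, b):
    """DTW distance between two 1-D feature sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a frame of a
                                 d[i][j - 1],      # skip a frame of b
                                 d[i - 1][j - 1])  # match frames
    return d[n][m]

def recognise(utterance, templates):
    """Return the label whose stored sample is closest to the input."""
    return min(templates, key=lambda lbl: dtw_distance(utterance, templates[lbl]))

# Toy one-dimensional "features", one stored template per word.
templates = {"yes": [1, 3, 5, 3, 1], "no": [5, 5, 1, 1, 1]}
print(recognise([1, 2, 5, 4, 1], templates))  # prints "yes"
```

This works for a handful of words from one speaker, which is exactly why it is unrelated to any modern commercial recogniser.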

This is just some Taiwanese university hoping for two minutes of fame and a settlement. They seem to be getting their fame, but they're not going to see any money.

A big factor is how well the programmer knows their stuff. I accidentally ran essentially the same test when I taught a course in a university computer science curriculum. Graduate students had to program a small speech recogniser in C++, Java, or Python, and I measured the speed of the entries on a standardised task. My reference version, in C++, was fastest, no surprise there. Second and third were students' programs in Java and Python (!). The very slowest program was extremely convoluted C++ code, a factor of 20 slower than the fastest C++ version. The Java programs showed less variance: they were bland, but all reasonable. The Python entries were concise.

My conclusion is that the performance advantages of C++ are easily offset by its ability to confuse the programmer. C++ is arcane and complicated; Java is more limited; Python is a lot cleaner. If you know exactly what you're doing, then C++ can be the fastest option, but in real-life situations, with deadlines and average programmers, don't go there.

This paper shows why psychologists should not touch computers, let alone write papers about them.
Hyperlearning may be a cause of schizophrenia in people, but this paper shows nothing of the sort. The learning rate in artificial neural nets determines the step size for the optimisation of a function. You need small steps because at each step the learning algorithm (gradient descent, or error backpropagation in this case) assumes that the function is locally linear. So do neurons in the human brain assume piecewise-linear behaviour in their neighbours? Of course not. The authors are simply clueless about the mathematical model they are using.
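To illustrate the step-size point on the simplest possible function (my own toy example, nothing to do with the paper): gradient descent only has a local linear view of the function, so a learning rate that is too large overshoots the minimum and diverges instead of "hyperlearning" anything.

```python
# Gradient descent on f(x) = x^2, whose gradient is 2x. The learning
# rate scales each step; the linear approximation is only valid locally.

def gradient_descent(lr, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # step along the negative gradient
    return x

print(abs(gradient_descent(0.1)))  # small steps: converges towards 0
print(abs(gradient_descent(1.1)))  # steps too large: each step overshoots, diverges
```

With lr = 0.1 each step multiplies x by 0.8, so it shrinks towards the minimum; with lr = 1.1 each step multiplies x by -1.2, so the iterate blows up. Raising the learning rate is not "learning faster", it is breaking the optimiser.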

You may not realise that speech recognisers need training data. And there is no data like more data. A year ago someone from Google told me that they trained their recogniser on 1000 hours of voice searches. If every utterance is a couple of seconds long, that's on the order of a million recordings. When you do a voice search, you can select from a number of recognition hypotheses; this is how they get transcribed data.

They also need to train on your voice specifically before you get decent recognition performance. I found that after a while my phone became surprisingly good at decoding my speech.
I do agree the privacy aspect is a concern, but in this case at least you benefit personally from Google storing your data.

I'm doing my PhD on speech recognition. I think (and hope!) it's neither dead nor fully developed. Currently, changes of environment (different speakers, background noise...) screw speech recognisers up.
A trick that I've heard is used for subtitling television broadcasts is to have someone re-speak the words (which is not that hard). You could play the audio recordings on your headphones while repeating them into a microphone. If you're in a quiet room and the recogniser is trained on your voice, that may get you most of the way. You'll still want to correct transcriptions manually.

I don't know of any good trained open-source speech recognisers. There are open-source back-ends like Sphinx or HTK (which I sort of work on), but you need massive transcribed training corpora to train a speech recogniser. This is expensive, which I guess is why open-source speech recognition hasn't taken off. In the speech recognition group at my university, most people use Linux, and I don't think anyone actually uses a speech recogniser in their daily work.

The education system, I'd say across the world, is completely outdated and is a perfect example of a government-run system.

Let me guess. Your world is constrained to North-America?

Even with all the technological advances available to schools, we still use the 17th-century lecture-style instruction method across the globe. We cram 30 students into a room with one teacher and force everyone to learn at one pace, from the smartest to the dumbest.

In countries like the Netherlands and Germany, there are three or four different tracks for students aged 12-18. Around 15% follow the "pre-university" track in the Netherlands. It worked well for me, for exactly the reasons you give.

You won't get this, though, because we live in a world that demands "social justice", a.k.a. forcing the smartest to be lumped in with the dumbest and the laziest.

I don't know where you get your idea of social justice from. Social justice would be to mix rich and poor in a classroom.

2 - Republicans don't go to war more than Democrats do. Both parties voted to go to war. People seem to forget that polls showed that US citizens, as well as much of the world, supported going into Iraq immediately after 9/11, on the false premise that Saddam had ties to 9/11. Bush pushed for diplomacy and intel. That intel concluded that Saddam had no ties to 9/11. A warmonger strikes while the iron is hot; he doesn't push for diplomacy for a few more years.

I can't stand to see such blatant deception moderated so highly. Bush and his cabinet pushed for war, and manipulated intelligence to make it look more desirable. No one ever suggested that there was a link between Saddam and 9/11; rather, Bush's administration manipulated evidence to falsely suggest that Saddam had weapons of mass destruction.