I have been using the program TextAloud from NextUp since May 2005. Essentially, it converts documents into spoken text. Saving the output as an .mp3 file allows me to listen to scientific documents while commuting between home and work. Even though the concept is very nice, it does not work very well yet. Articles in .pdf form from journals in particular need a lot of reformatting; this needs to be simplified. Also, the spectrum of available synthetic voices is quite limited. Even though AT&T's Natural Voices sound very good, a lot can still be improved to make them more pleasant to listen to. Furthermore, an intelligent combination of good computer-based text summarization (e.g. the Copernic Summarizer) with text-to-speech would make it even more useful: scientific articles that one would normally not have time to read could be converted into audio to be listened to in the car.

Inspired by the work of Markus Rauschhaupt et al. (MCRestimate / Compendium) I have switched to using Sweave for my data analysis work. While there are certainly many other approaches, this concept seems quite useful, as it combines writing the R code for the analysis of various molecular profiling data with writing the report and the conclusions drawn from the analysis. What is especially nice is that a single file containing both analysis code and documentation generates the report: when the file is executed, the R code is run, the analysis is carried out and the report is written. All in all, one little step towards ensuring reproducible research in bioinformatics that will hopefully lead to better knowledge in molecular biology, as every single step can be verified by external people.
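To give an idea of what such a combined file looks like, here is a minimal Sweave sketch (toy data, hypothetical file name report.Rnw): LaTeX text and R code chunks live side by side, and every run of the file recomputes the results embedded in the report.

```
\documentclass{article}
\begin{document}

\section{Analysis of expression data}

We load the data and compute a summary statistic:

<<summary, echo=TRUE>>=
x <- rnorm(100)   % placeholder for real molecular profiling data
mean(x)
@

The mean shown above is recomputed every time the report is
generated, so text and results can never drift apart.

\end{document}
```

Running Sweave("report.Rnw") in R produces a .tex file with the code output woven in, which LaTeX then turns into the final report.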

This is something I am still puzzled about... Under normal circumstances I would not just walk up to a person and start talking no matter what that person is currently doing. But with phones this is completely acceptable. All kinds of people may call almost anytime, and like Pavlov's drooling dogs we have the reflex to pick up the phone - irrespective of how urgent or important the caller's message is in comparison to what we are doing when the phone rings. I sometimes wonder whether "pulling the plug" for certain times is really the solution, or whether we should have a smarter system that lets us decide what to do with a call. We have the "subject" line in emails and even the concept of white lists to only receive emails from "authorized" persons. Yes, we have caller IDs to see who is calling, but this does nothing about the rudeness of the interruption. Maybe we should transform our phones into "112" lines, send emails for the rest, but also increase our efforts to actually walk over to someone and have a face-to-face conversation.

While transferring my recent digital photos last weekend and doing the "mandatory" data backup, I wondered how many people will be disappointed in the coming years when, after the global switch from analog to digital, they run into problems like these:

data on media that cannot be read anymore due to the lack of a suitable reader

And this does not even consider the fact that the omnipresence of mobile phones and cameras tempts us to "generate" pictures and movies that we will never have the time to watch again - not to mention the time needed to organize, update and back up all these images and videos...

Technology advances lead to the generation of increasing amounts of data. The first wave of software only operates the instrumentation and/or captures the data generated by a given experiment. The second wave helps with data preprocessing and provides tools to identify statistically significant data points in the whole data set. What is still largely lacking are third wave programs that are very simple to use for people with high scientific domain knowledge who are not necessarily computer-minded. As second wave software often requires some level of programming skill (a good example is the increasingly popular open source software R - http://www.r-project.org) or a higher than scientist-normal level of data analysis (traditionally called "bioinformatics") skills, the flow from data analysis to data interpretation is interrupted. Third wave programs are needed to capture the scientific interpretation. Ideally, such third wave programs should be built so that they are aware of fourth wave software: the tools that ultimately generate knowledge and provide scientists with new hypotheses. For these programs to work effectively and to require as little human input as possible, third wave tools have to ensure that scientific conclusions are captured in a computer-understandable way. Many different measures come to mind: a controlled vocabulary, enforced use of official gene symbols (instead of the very many synonyms), enforced spell checking, and perhaps even showing the scientist how a computer "sees" his/her input. This is a completely different approach from what many software companies nowadays proclaim as the next big thing: data integration. If the scientific interpretation is lacking, just integrating data will only be of limited use.
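A minimal sketch of the gene-symbol enforcement idea: before a scientist's interpretation is stored, every gene name is mapped to its official symbol or rejected. The synonym table below is a toy example I made up for illustration; a real third wave tool would load it from a curated nomenclature source.

```python
# Sketch: normalize gene names to official symbols before capturing
# a scientist's conclusions, so the stored interpretation stays
# computer-understandable. OFFICIAL_SYMBOLS and SYNONYMS are toy
# data, not a real nomenclature database.

OFFICIAL_SYMBOLS = {"TP53", "ERBB2", "EGFR"}

SYNONYMS = {
    "p53": "TP53",
    "HER2": "ERBB2",
    "HER-2/neu": "ERBB2",
    "HER1": "EGFR",
}

def normalize_gene(name: str) -> str:
    """Return the official symbol for a gene name, or raise if unknown."""
    if name in OFFICIAL_SYMBOLS:
        return name
    if name in SYNONYMS:
        return SYNONYMS[name]
    raise ValueError(f"Unknown gene name: {name!r} - please use an official symbol")

print(normalize_gene("HER2"))  # ERBB2
```

Refusing free-text names at input time is the key design choice: the controlled vocabulary is enforced where the interpretation is captured, not patched in afterwards.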

While manufacturers of mobile computers focus on providing devices that come close to the performance of a regular desktop computer, I wonder who defines these requirements. The most useful mobile computer I have seen so far is the Psion Series 5. As it is not made anymore - it was launched by Psion back in 1997 - I am still waiting for a suitable substitute.

These are the elements I really liked - when will a modern device cover them all?

Instant on

Long battery life (I got days and weeks of use - not two or three hours)

Small, but real keyboard (large-travel keys)

Touch-sensitive screen

Small size (much less than a notebook - just enough to have a decent keyboard)

Miniaturization combined with advances in computer technology leads to measuring more and more data per sample. All current molecular profiling technologies (e.g. transcriptomics, metabolomics, SNP analysis, etc.), but also techniques like RT-qPCR (from single tube assays to multiplex to 96 well plates to 384 well plates to ...) or simply measuring ODs (one used to measure single ODs like OD260 or OD280, but now one gets the whole spectrum within a second) benefit from this development. While this looks nice initially, it makes clear how important it is to put effort into providing people with the software tools to interpret this wealth of data. Currently, technology providers increase the data density per measured sample much faster than statisticians can provide programmers with the right tools for data analysis.