Mugshot – aggregates a limited number of sources, doesn’t seem to update properly from del.icio.us, has conversation features (quips, comments)

FriendFeed – nice look and feel, a limited number of sources, has conversation features (comments, ratings)

Profilactic – by far my favourite in terms of look/feel and sources (you can add anything that has a feed) but no conversations as yet

Lifestreams are fun. I don’t expect anyone to care about what I just played on last.fm (and vice versa), but these are all ways of broadcasting yourself and making connections. Read Deepak’s post for some thoughts on how this might apply to science.

17 thoughts on “Lifestreaming”

Yes, absolutely! Been thinking about this over the last couple of months, particularly with respect to tracking people in the lab and connecting a data feed from an instrument (say a PCR machine or an autoclave) to a person. The guys at Southampton have already put together the blogging lab (see http://chemtools.chem.soton.ac.uk/projects/blog/blogs.php/blog_id/24 for an example). Now if we could connect that up to, say, a Twitter feed (‘I am doing a PCR reaction: date, time’) then we may be able to reconstruct what happens in the lab at a very detailed level. We are thinking of applications in safety (recording that the autoclave is working properly) but this will also be great for general troubleshooting (did the PCR machine do what it was supposed to, has someone fiddled with my programme, what was the temperature in the incubator, etc.).
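The instrument-to-Twitter idea above boils down to formatting a short status line with a timestamp and pushing it to a microblogging service. A minimal sketch, assuming a hypothetical `instrument_status` helper and invented instrument/run names (the actual HTTP posting step is service-specific, so it is only noted in a comment):

```python
from datetime import datetime

def instrument_status(instrument, action, when=None):
    """Build a short status line like the 'I am doing a PCR
    reaction: date, time' example above. All names are illustrative."""
    when = when or datetime.now()
    return f"{instrument}: {action} at {when:%Y-%m-%d %H:%M}"

# Posting it to a Twitter-like service would then be a single HTTP call
# (e.g. via urllib.request); endpoint and authentication depend on the service.
msg = instrument_status("PCR-1", "starting run 'colony screen'",
                        datetime(2008, 1, 15, 9, 30))
print(msg)  # PCR-1: starting run 'colony screen' at 2008-01-15 09:30
```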

They can if you can get an RS232 feed off the back of them and send it out to the world. Our new eppendorf thermal cycler has RS232 and RJ11 ports. Not sure yet what we will be able to do with them but the guys at Soton have pulled down data off other instruments and autoblogged it.
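Pulling data off a serial port and autoblogging it is, at its simplest, timestamping each line as it arrives and appending it to a running log. A sketch under those assumptions — the device path and the instrument output line are invented, and a real setup would read from the port itself rather than a test string:

```python
import datetime

def log_serial_line(raw, log):
    """Timestamp one line read off an instrument's serial port and
    append it to a running log (a list here; a blog post in practice)."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    log.append(f"{stamp} {raw.strip()}")

# In practice the lines would come from the port itself, e.g.
#   with open("/dev/ttyS0") as port:
#       for line in port:
#           log_serial_line(line, log)
log = []
log_serial_line("BLOCK TEMP 94.8C\r\n", log)
print(log[0])
```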

Unfortunately the blogject instruments at Soton are currently secured away unless you have a chemtools login which is a shame. But the Room Blogs give you an idea what is being done. At the moment this is mostly wrapped up in a blog post so you have the day’s data, or a single run, rather than a running commentary. Don’t see any particular reason why this isn’t possible, just hasn’t been the direction things have gone in yet.

A key question with this is what you would use the information for. We have thought about it mostly in terms of troubleshooting so packaging things is helpful. Still, a twittering PCR machine would be a nice demonstration.

@Deepak. True but one of the advantages of doing this in a lab is that we have lots of ethernet ports going back to a breakout box. All that lovely twisted pair means we can run a lot of things to a lot of places, in a lot of different formats. That said I have been losing the argument over getting more data points in the new lab so wireless will help. As does everyone having a smart phone.

The fun bit to me is how this gets linked up with everything else so that a text message from me to a colleague (or perhaps to the PCR machine?) also gets tied in and logged. I could do this at the moment with the Life Blog application on my Nokia phone, but you’d get an awful lot of texts from me to my wife saying which train I’m on. I’m not sure that is insider information that I am ready to start giving out :)

I’m not sure that is insider information that I am ready to start giving out

On that topic – do any staff/students express privacy concerns when you float these ideas around? I can imagine some individuals don’t like the idea of their daily activity being monitored: “George did nothing today” wouldn’t look too good on the workstream. On the other hand, it might become competitive and be good for lab productivity :)

Of course, one will run into the same challenges (probably multiplied) that one does with lifestreaming apps … lots of noise. So in the end we need some smart infrastructure that recognizes important signals (e.g. a signal that says, hey, your machine quality over the past few days suggests that something is wrong … check it out), or hey, xyz is pushing the system beyond recommended limits, or a combination of A and B is your best bet for optimal results.
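One simple way to pull that kind of signal out of the noise is to compare recent runs against the longer-term history and flag a drift. A minimal sketch, assuming made-up efficiency readings and an illustrative two-standard-deviation threshold (not a recommendation):

```python
from statistics import mean, stdev

def drifting(history, recent, z=2.0):
    """Flag a possible problem when the mean of recent runs sits more
    than z standard deviations away from the longer-term history."""
    if stdev(history) == 0:
        return mean(recent) != mean(history)
    return abs(mean(recent) - mean(history)) > z * stdev(history)

yields = [48, 51, 50, 49, 52, 50, 51, 49]    # past machine readings (invented)
print(drifting(yields, [43, 42, 44]))        # True: quality has slipped
print(drifting(yields, [50, 49, 51]))        # False: nothing unusual
```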

It would be nice if instruments could twitter in a fixed vocabulary, and they listened to each other’s streams. Then they can understand what each other is doing. For example machine B knows machine A has finished its job, so it switches itself on to be ready for the next step in the experiment.
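The machine-B-listens-to-machine-A idea can be sketched as one instrument scanning another’s status stream for a “finished” message. The message format here is invented — which is exactly where the vocabulary question bites:

```python
def next_step_ready(stream, upstream="PCR-1"):
    """Machine B watches machine A's status stream and switches itself
    on once A reports it has finished. The status line format is a
    made-up convention, not a real protocol."""
    return any(line.startswith(f"{upstream}: finished") for line in stream)

stream = ["PCR-1: starting run", "PCR-1: cycle 30 of 30", "PCR-1: finished run"]
print(next_step_ready(stream))  # True – machine B can power up
```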

@neil Mostly I don’t get these kinds of concerns because I’m the person in charge so people don’t want to come to me asking how to hide the fact that they are being lazy :) Actually it is a serious issue of privacy and we do need to consider whether some of this ought to be anonymised. It’s another reason why it is better for the object to be the blogger than the person using it.

I haven’t proposed tracking everyone’s position yet either, although it’s an obvious extension. At the end of the day it’s a question of balance between what makes people’s life easier and how much invasion into their ‘privacy’ they are prepared to accept.

@Deepak. Yes, and this is why easy tagging is important. Steve Wilson has done some nice stuff on inferring what is going on from simple sensor data (PIR switches in his case) and one can set specific triggers as well (monitor the -80 freezer continuously but if it goes above -60 then start setting off alarms/texting people).
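The freezer trigger above is just a threshold check layered on top of continuous logging. A minimal sketch using the numbers from the comment (the readings are invented; a real monitor would also text/alarm rather than just filter):

```python
def freezer_alarm(temp_c, warn_at=-60.0):
    """Continuous monitoring with a simple trigger: log everything, but
    only alarm when the -80 freezer climbs above the threshold."""
    return temp_c > warn_at

readings = [-79.5, -78.9, -55.2]            # illustrative temperature log
alarms = [t for t in readings if freezer_alarm(t)]
print(alarms)  # [-55.2]
```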

@Mike. I don’t think that a fixed vocabulary is what you want but flexible wiring tools that act on data feeds and web services. Then a workflow could be set up that would listen to a data feed, and on a specific event, trigger something else through a web service. This links to some of the stuff Deepak has been talking about re: programmable web. If we go down the fixed vocab route we will spend the next 10 years arguing over things. If the instruments spout out some reasonably well-structured XML that is human readable then we (royal we, I’m just an end user here) can develop tools that will allow people to wire things up.
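That wiring idea — human-readable structured XML in, a triggered action out — can be sketched with the standard library alone. The event format and trigger table below are entirely hypothetical stand-ins for a real wiring/workflow tool:

```python
import xml.etree.ElementTree as ET

# A hypothetical instrument event: human-readable, but structured
# enough for tools to act on.
event = """<instrument name="PCR-1">
  <status>run_complete</status>
  <detail>colony screen, 30 cycles</detail>
</instrument>"""

def on_event(xml_text, triggers):
    """Fire whichever action is wired to this status. The trigger
    table stands in for a real web-service call."""
    root = ET.fromstring(xml_text)
    status = root.findtext("status")
    action = triggers.get(status)
    return action(root.get("name")) if action else None

triggers = {"run_complete": lambda name: f"notify: {name} finished"}
print(on_event(event, triggers))  # notify: PCR-1 finished
```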