The key change happening because of the Internet of Things is the move towards an actuated internet, one in which the data we collect is used to make changes in the physical world. This most commonly comes under the tag “cyber-physical systems”, but the focus is usually on the ‘things’ – buildings, heating systems, transportation networks – infrastructure generally. There is clearly a human side to this, though, that is just as important – perhaps more so.

Some research at Cambridge, which made the news recently, looked at correlations between what we ‘like’ on Facebook and attributes such as sexual orientation and participation in illegal activities. We are not yet at the point where this inferred information is being used against us, but there are examples where similar inference is already in use, and there is nothing to stop it happening more and more. In some ways it’s the modern equivalent of how our credit rating is built up. In the future, what will it be acceptable to infer from our online behaviour and use against us? Would it be okay if we could infer from seemingly unconnected information who was going to commit crimes, and take action before they did? This risks sounding a bit like the plot of Minority Report, but surely something similar can’t be too far away.

Up until now the link between what information is available and what is being inferred has always been transparent – or at least fairly easy to work out. If you don’t want pictures on Facebook to be seen by your employer, all you need to do is set your privacy settings appropriately. The recent case of Paris Brown’s past activity on Twitter coming back to damage her career is in some ways no different from a tabloid exposé of a politician’s past misdemeanours. The arguments about whether it is appropriate to expose elements of someone’s private life apply equally in both cases.

I guess what this post is really about is not the Internet of Things but the Internet of People. The same tools that actuate and control our infrastructure can be used to take action when the data comes from us. The more capable systems become of taking decisions automatically based on what they infer about us, the more careful we’ll need to be about the assumptions they are allowed to make.