I fear insurance. To be precise, I fear the paradoxical risks of ubiquitous data-driven “insurance-ization.”

We are just beginning to deal with the simultaneous onslaught of exponentially growing amounts of biomedical data, dirt-cheap analytical processes, and powerful financial pressures across all industries. As previously hidden health risks in specific individuals become visible, even before birth, insurance providers, governments, and individuals must deal with issues of adverse selection and uninsurable risk.

My concern is that there isn’t anything in this model restricting it to healthcare as such. Consider employment. To be sure, employers have always sought reassuring signals in the form of Harvard degrees and clean rap sheets, and as biological, social, and work-performance features become ever more widely and cheaply available, it will be impossible to prevent this data from being gathered, models of “work performance based on Facebook friending patterns” from being developed, and then…

Well, the bottom line will be increased pressure for homogeneity. After all, most businesses are fiduciarily required to reject avoidable risks, and if industry-standard human resources analytics say that people with fewer than two new Facebook friends each week are less productive on average… It won’t breach any existing anti-discrimination law, and yet it will end up being a homogenizing pressure.
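To make the worry concrete, here is a minimal sketch of the kind of naive, risk-averse filter such analytics would amount to. Everything in it is hypothetical: the "new friends per week" metric and the threshold of two are invented for illustration, and no real HR product is implied.

```python
# Hypothetical sketch of a risk-averse HR filter. The metric and the
# threshold are made up for illustration; no real analytics product exists.

def risk_averse_filter(candidates, threshold=2):
    """Reject every candidate below the industry-standard threshold,
    regardless of how weak the underlying correlation actually is."""
    return [c for c in candidates if c["new_friends_per_week"] >= threshold]

candidates = [
    {"name": "A", "new_friends_per_week": 5},
    {"name": "B", "new_friends_per_week": 1},  # rejected, however productive
    {"name": "C", "new_friends_per_week": 2},
]

hired = risk_averse_filter(candidates)
print([c["name"] for c in hired])  # ['A', 'C']
```

Note that the filter never asks whether the correlation is real or strong; once the threshold becomes an industry standard, everyone below it is uniformly excluded, which is exactly the homogenizing pressure described above.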

Never mind that models of this kind are seldom worth much, if anything. The value of a standard, business- and politics-wise, lies more in its being a standard than in whatever predictive value it might have. A case in point: the inexplicable existence of a no-fly list of people “too much of a terrorist” to fly and yet “not enough of a terrorist” to arrest.

There is much research dedicated to signaling phenomena: roughly speaking, the things organisms do (like getting into the “right clubs”, or carrying around a big, cumbersome feather tail) not because they are useful, but to show others that they can, and hence to suggest other, presumably related qualities. This leads to often subtle dances of signals and countersignals, deceit and traps, which are a big part of our biological and social life.

I fear, however, the data-driven expansion of this signaling behavior into a much larger area of our lives. Constantly tailoring your online behavior (and what behavior is nowadays purely offline?) to convince vague, distributed entities that you are what an average-of-averages HR department would describe as normal is dispiriting enough when those entities are people. When they are sleepless, tireless digital algorithms, and an already present risk-aversion leverages itself on quantified models for the hiring and promoting of people, the result will be a serious setback to our quality of life.

Fear not! Sure enough, data-mining technologies and techniques used by employers and insurers do and will lead to the scrutiny you fear, even now. However, transparency works both ways, and so does accountability.

In an evolving global social community, it will thus become immediately transparent that these HR policies promote the employment of drone ants for factories, creative pragmatists for R&D, and self-serving psychopaths for CEOs.
This greater transparency will in turn bring greater accountability, scrutiny, and legislation for equality, acting against social and biological prejudice.

I do share your concerns where healthcare-insurance scrutiny is concerned, yet it takes only days for legislation to be enacted to overturn unfair discrimination, if that is what society, its peoples, and democracy demand.

I envisage that data mining, information sharing, transparency, and accountability will ultimately promote an ebb and flow (push and shove) of global social evolution and further democratic progress.