Making Perfect Life?

Posted: April 8, 2013

The blurring boundaries between biology and technology

Biology and technology are merging in more ways than ever imagined. An ambitious technology assessment study, Making Perfect Life, has delved deeply into the changes this might entail. How can policy makers ensure these changes are mostly for the good?

Whatever lives is fundamentally different from dead matter, or so we feel. As 21st-century citizens of industrial societies, many of us still believe that things that breathe are more sacred than things that do not. We may know that organisms are complex systems that can be understood in terms of their components and processes. But deep down, many of us feel that within the humblest bacterium there is an indefinable something that has so far escaped our analysis. How can the wet stuff of animate beings and the dead stuff of human-made contraptions merge seamlessly into one?

Yet that is just what is happening in labs around the world. Scientists and engineers are increasingly blurring the boundaries between organic creatures and mechanical creations. Four technosciences are involved, each of them offering their own perspective and technological contribution. It’s known as the NBIC convergence – Nanotechnology, Biotechnology, Information Technology and Cognitive science. To span the organic-mechanical gap, bridges are being built from both sides.

‘It can be very hard sometimes to see the difference between a machine and a living being.’

“The life sciences are approaching their subject in a more and more technological way,” says Rinie van Est (Rathenau Institute, The Hague), project leader of Making Perfect Life. “Biologists are no longer content to study or even manipulate organisms, but are bent on building new ones from scratch. In terms of their ambitions, that’s nothing less than a revolution. Equally novel is how the physical sciences are nowadays drawing inspiration from the functioning of living things and trying to imitate them. The Blue Brain Project, for instance, aims to replicate the entire human brain in a computer system, the hope being that with a full understanding of its functioning, we will be able to build more powerful and efficient computers.”

Informed by this observation, the project team chose ‘biology becoming technology, technology becoming biology’ as their guiding concept. “These are two megatrends that will have a huge impact on the future of our society”, says van Est. “And in a way, the two are really one, because they reinforce or even help to create each other. It reminds you of the famous Escher picture of the two hands. Together, the trends represent a new engineering approach to life.”

Controversial science
Along with nuclear power, biotechnology has been one of the most consistently controversial fields of scientific endeavour in recent times. Even today, techniques such as genetic engineering and cloning leave many people uncomfortable and are subject to restrictive regulation, especially in Europe. Will the new megatrends of biology becoming technology and vice versa lead to a similar sense of unease and to new bans and guidelines? “There’s certainly reason to discuss the foreseeable consequences of the innovations very carefully,” according to van Est. “That’s exactly the debate we are hoping to have initiated with Making Perfect Life. And it’s not just that, say, medicine will be able to cure more diseases – that would be relatively straightforward. Several of the new technologies are already being eyed up by other industries. We call that ‘a change in social practices’, and though there’s nothing at all wrong with it per se, it does raise a whole range of new regulatory issues.”

So much for the general concepts – let’s get down to the nitty gritty. The researchers have distinguished four main areas where the megatrends of bioengineering are playing out. They’ve labelled them intelligent artefacts, living artefacts, interventions in the body and interventions in the brain.

Making Perfect Life is a study commissioned and funded by the European Parliament, as a project of the Parliament’s Science and Technology Options Assessment (STOA) Panel, under the responsibility of Malcolm Harbour and Vittorio Prodi, MEPs. STOA contributes to the debate on strategic scientific and technological issues of political relevance and the policy options for tackling them through projects of a medium to long-term, interdisciplinary character, as well as information and dialogue activities, whose outcomes are relevant to the European Parliament in its role as legislator.

Intelligent artefacts are machines that have certain lifelike qualities without being alive in any biological sense. They have chips, not genes; they have metal and plastic components, not tissues. This is worth keeping in mind, because otherwise claims and fears can easily be overblown. These artefacts are equipped with sensors to register a variety of signals, especially those emitted by human bodies. Our good old organic senses already have synthetic counterparts. Several sorts of sensors have been around for a while, but they still illustrate the trend of technology-becoming-biology. Human signals registered include sound, as in speech, grunts and squeaks, and mechanical signals, commonly known as movement and body language. Other examples include chemical, electrical and thermal signals.

Biologists are no longer content to study or even manipulate organisms but are bent on building new ones from scratch… Illustration: Petit Comitè.

From these observations, computer software determines what we are ‘doing’ and how we are ‘feeling’. An appropriate response is calculated, which could consist of words, actions or a rudimentary display of ‘emotions’. If all goes well, the human partner will indeed perceive these as adequate. (Sometimes, however, not all goes well: with more than one person in a room, systems can get individuals mixed up.) When these digital skills, such as they are, are uploaded to an ambulant device, you’re looking at a smart robot. When the skills are integrated into our living environment, the result is known as ambient intelligence. In both cases, we’re dealing with artefacts that are more interactive and more humanlike than we’ve been used to so far.

With machines that respond adequately and in real time to our individual physical and psychological state, we already seem to be fulfilling the prediction that animate beings and inanimate contraptions will merge seamlessly. But an even smoother human-machine interface has hit the labs: neurophysiological computing. Here, just one thing is measured: patterns of brain activity. From these, emotions can be inferred. This is the technology behind thought-controlled wheelchairs, and it has also captured the imagination of computer-game manufacturers and users. These new, intimate links between humans and machines are expected to find applications in three fields.
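To make the idea of neurophysiological computing concrete, here is a deliberately toy sketch of the kind of loop such a system runs: read a pattern of brain activity, infer a state, issue a command. Everything here is invented for illustration – real systems use trained classifiers over multi-channel signals, not two numbers and hand-picked thresholds.

```python
# Illustrative sketch only: mapping (hypothetical) EEG band-power readings
# to wheelchair-style commands. The band names, thresholds and the 2x ratio
# rule are all assumptions made up for this example.

def infer_command(alpha_power: float, beta_power: float) -> str:
    """Map two made-up band-power features to a simple command."""
    if beta_power > 2 * alpha_power:   # strong 'concentration' signal -> move
        return "forward"
    if alpha_power > 2 * beta_power:   # strong 'relaxation' signal -> halt
        return "stop"
    return "hold"                      # ambiguous reading -> do nothing

# A short list of (alpha, beta) readings stands in for a live EEG feed.
readings = [(0.4, 1.1), (1.3, 0.5), (0.8, 0.9)]
commands = [infer_command(a, b) for a, b in readings]
print(commands)  # -> ['forward', 'stop', 'hold']
```

The "hold" branch matters: as the article notes for emotion-sensing systems generally, conclusions drawn from signals can be wrong, so a real controller needs a safe default for ambiguous input.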

Sensitive interaction
The first of these is computers skilled in sensitive interaction, enabling them to take the place of human communication partners. They can function as extremely patient teachers, constantly eager computer-game adversaries or highly accurate doctors or nurses. Don’t be surprised when you see such systems appear and spread in e-learning, gaming and health care some time soon, because they’ve already reached the clinical testing stage.

But will we be comfortable with these ambiguous ‘beings’ around us, which are lifeless and lifelike at the same time? “It can be very hard sometimes to see the difference between a machine and a living being,” says Brigitte Krenn, an Austrian researcher of artificial intelligence. She gives a relatively low-tech example: “I know of one case where an old lady in a home for the elderly was confused by an emergency call coming from the intercom system – she thought a voice was talking to her through the television.” But it’s not just the elderly. Who can truthfully claim never to have been wrong-footed by a synthetic telephone voice, mistaking the speaker for a flesh-and-blood person? Machines are becoming more human. Krenn suggests that what is going on inside a machine should be visible on the outside – a hardware equivalent of the ‘what you see is what you get’ concept in software.

Philosophers are not afraid to raise questions that are simultaneously naive and profound. So, “Do we need all these new applications?”, wonders Jutta Weber (Technical University, Braunschweig), “especially when not everybody knows how to handle them?” She believes we need to give people a better technical education in using computers and applications first. Instead of just inventing things (and the list of innovations that never caught on is a long one), engineers should ask people what they need in their daily lives. This is not to suggest that engineers are attempting to push any old innovation down society’s throat, but that user needs and preferences should be taken into account at an early stage.

‘Biology becoming technology’ implies and promises new types of interventions that further enhance the manipulability of living organisms, including the human body and brain. It is illustrated by ‘top-down’ synthetic biology, molecular medicine, regenerative medicine, forward engineering of the brain and persuasive technology. The physical sciences (nanotechnology and information technology) are enabling progress in the life sciences, such as biotechnology and the cognitive sciences, and creating a new set of engineering ambitions with regard to biological and cognitive processes, including human enhancement. In the future, genes, cells, organs and brains may be bio-engineered in much the same way as non-living systems, such as bridges and electronic circuits.

The ‘technology becoming biology’ trend embodies a (future) increase in bio-, cogno- and socio-inspired lifelike artefacts, which will be applied in our bodies and brains, intimately integrated into our social lives, or used in technical devices and manufacturing processes. These (anticipated) new types of interventions and artefacts represent a new technological wave driven by NBIC convergence. It is illustrated by ‘bottom-up’ synthetic biology, the shift from repair to regenerative medicine, reverse engineering of the brain and the engineering of living artefacts. This future development relies heavily on so-called biomimicry or biomimetics: learning from the achievements of nature (though there’s room for improvement).

The second application of machines with human-like interactive skills is the benevolent personal supervisor: computers that monitor how we feel (fit or tired, alert or drowsy, amused or bored) and that are capable of intervening when we fall into an undesirable state. When our bodily signals cross some predetermined threshold, the computer will take action. Systems that alert car drivers who are dozing off are already on the market, and are bound to spread to other types of travel. Bored with a computer game? The manufacturer will want to measure when that happens too.
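The threshold logic described above can be sketched in a few lines. This is a minimal illustration, not a real driver-monitoring system: the two input signals, their weights and the threshold value are all assumptions invented for the example.

```python
# Toy sketch of a 'benevolent supervisor': combine bodily signals into a
# drowsiness score and raise an alert once it crosses a preset threshold.
# Signal names, weights and the threshold are illustrative assumptions.

DROWSINESS_THRESHOLD = 0.7  # assumed calibration value

def check_driver(blink_rate: float, head_nod_rate: float) -> bool:
    """Return True when the combined score says 'sound the alarm'."""
    score = 0.6 * blink_rate + 0.4 * head_nod_rate  # weights are made up
    return score > DROWSINESS_THRESHOLD

alerts = [check_driver(b, n) for b, n in [(0.2, 0.1), (0.9, 0.8)]]
print(alerts)  # -> [False, True]
```

Even in this toy form, the policy questions the article raises are visible: someone must choose the threshold and the weights, and those choices determine when the machine intervenes in a person's life.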

Ambient intelligence
The third application occurs when a computer is integrated into an everyday residential environment. Here, the user interface becomes all but imperceptible. In ambient intelligent applications, sharp sensors are built into the living space – artificial eyes, ears and noses. Early efforts have concentrated on environments for the elderly and infirm, to enable them to lead more independent lives. Ambient intelligence is likely to figure in other areas, including ‘intelligent’ homes, health care and support for the disabled, as well as industry and business. Brigitte Krenn’s misgivings about machines posing as humans are relevant again, as is Weber’s question about the appropriateness of new applications.

‘People can feel that they are losing control of information which actually belongs to them. The public trust here is very fragile’

All these smart systems raise other awkward questions. It’s important to realise that they cannot function without collecting massive amounts of personal data about their users. Detailed information on the user’s actions, thoughts and emotions thus becomes available – but to whom? Who should have access to this sensitive data? It’s certainly the sort of information that will interest many parties: knowing what people do, think and feel is the ultimate dream of any marketer, not to mention certain actors in totalitarian states.

“Our personal privacy is very much being affected,” warns legal expert Judit Sándor of the Central European University in Budapest. “People can feel that they are losing control of information which actually belongs to them. The public trust here is very fragile. We need to think about these issues in the long term.” By way of a practical solution, Michael Rader of the Institute for Technology Assessment and Systems Analysis in Karlsruhe suggests introducing a system of licensing and procedures to control the data. “We have enough experience to develop them, but at the moment we are lagging behind…” [European regulators reading this, get your yellow markers out.]

Equally awkward is what might be termed the fallibility issue. On the basis of information from sensors, these applications draw conclusions about people’s moods, deeds and needs and take action accordingly. But their conclusions and actions are only as good as their software, which in turn is so complex that it is utterly impossible for programmers to predict how it will respond to every single eventuality. What happens when the computer makes the wrong decision? When dealing with a vulnerable person, serious or even fatal harm is a possibility. Human actors also make mistakes, of course, so it might be argued that as long as the machines do no worse than we do, no ‘net harm’ is done. But who is responsible for these ‘automated mistakes’? Is it the manufacturer, or should the finger be pointed at the operator? European regulators will have to figure out what is just and practicable here.

‘Strong negative feelings among the general public are never far away, and metaphors such as ‘playing God’ and ‘Frankenstein’ – however clichéd – have lost little of their rallying power.’

The second main area of bioengineering aims at modifying existing or even building new, mostly very small, life forms. Unlike ‘traditional’ biotechnology, engineers have set their sights on creating these from scratch. In practice, there is a continuum, with species being genetically altered at one extreme and entirely new ones being crafted at the other. Reading from left to right, as it were, the ‘biology-becoming-technology’ trend is well established: the number of newly introduced, artificial components and processes increases while the number of natural components and processes dwindles. At the extreme right, where we see the opposite trend, the ‘technology-becoming-biology’ process is still in its infancy.

Cellular chassis
It is believed that in the future, the young discipline of synthetic biology will use synthetic genes as tools to transform cells into biological factories or agents with a highly artificial nature, based on a so-called minimal genome as a cellular ‘chassis’. The long-term ambition is to create ‘proto-cells’ that would be self-sustaining and self-duplicating, starting from non-biological molecular building blocks. Useful features could then be grafted onto these proto-cells, or so the reasoning goes. At this point in time, however, it is extremely difficult to assess the potential of synthetic biology.

Obviously, the traditional worries about biotechnology also pertain to these developments.

Strong negative feelings among the general public are never far away, and metaphors such as ‘playing God’ and ‘Frankenstein’ – however clichéd – have lost little of their rallying power. It is not just the general public; ethicists are also struggling with the issues raised by ‘creating life’. With bio-engineering becoming ever more ambitious and possibly more potent, a new bio-debate seems in order. European politicians have a choice. Should they stimulate public debate in order to develop societal standards for living artefacts, which may result in a cautious acceptance or outright rejection of synthetic biology? Or should they leave the fate of synthetic biology to market forces, hoping for a smoother ‘under the radar’ introduction but risking a public outcry and loss of credibility later on? In a democracy, the question should be a no-brainer.

Goggling beyond Google: How will we be seeing things in the future? Photo: Gettyimages.

Other policy choices are of a more technical nature. Are the safety standards for biotechnology adequate for synthetic biology, or should new approaches be adopted? Does synthetic biology require special regulation of intellectual property rights to ensure a healthy balance of open access and protection? Should Europe stimulate the establishment of technical standards in synthetic biology, to help European players catch up with the now-dominant US?

‘Old’ biotechnology used to be about things like genetically modified crops and cloned farm animals. With bioengineering, we are now targeting our own species. The human genome was first mapped back in 2000. It is expected that within a few years, it will be possible to sequence the entire genome of any individual in a matter of days (and at well under a thousand euros, a not excessive cost). The current frontier for research is squeezing meaning out of the raw data that whole-genome sequencing produces. This is where biology is, yet again, becoming technology. Once the billions of As, Cs, Gs and Ts can be confidently interpreted, it will be possible to predict – among other things – the diseases an individual is prone to, and even to establish which treatment is best, given the rest of the person’s genetic makeup. Personalised medicine is the name of this game.
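The interpretation step can be pictured, in grossly simplified form, as looking up known positions in the sequence against a table of annotated variants. The sketch below is a toy under stated assumptions: every position, base and annotation is fictional, and real interpretation pipelines involve far more than a dictionary lookup.

```python
# Toy illustration of 'squeezing meaning out of raw sequence data':
# match positions in a genome string against a fictional table mapping
# (position, base) pairs to risk annotations. All entries are invented.

risk_table = {
    (12, "T"): "elevated risk: condition A",
    (47, "G"): "altered drug response: drug B",
}

def annotate(genome: str) -> list[str]:
    """Return the annotation for every (position, base) pair found."""
    return [
        note
        for (pos, base), note in risk_table.items()
        if pos < len(genome) and genome[pos] == base
    ]

# Position 12 carries 'T' and position 47 carries 'G' in this toy genome.
genome = "A" * 12 + "T" + "A" * 34 + "G"
print(annotate(genome))  # -> ['elevated risk: condition A', 'altered drug response: drug B']
```

The point of the toy is that the raw string is meaningless until a knowledge base gives it meaning – which is exactly why, as the article goes on to argue, the same stored data will reveal more and more as interpretation improves.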

The opposite movement, from technology to (human) biology, can be observed in current transplantation practices. While artificial implants, such as heart valves, have been commonplace for decades, the future might see implants manufactured on the spot by a three-dimensional printer. Artificial blood vessels are likely early candidates for this technology.

Transplantation medicine also provides us with an illustration of the Escher-like two-way movement: after the biological surprise discovery that the cells of the human heart are capable of dividing after all, a technology was developed for cultivating new heart tissue on the basis of the patient’s own cells. The method is being tested on pigs first. Stem cell technology presents another example. These forms of regenerative medicine hold considerable promise in terms of curing diseases and lengthening human life.

But privacy is, once again, a huge issue. As with the use of intelligent artefacts, which allow the storage of an immense amount of data about an individual’s thoughts, feelings and actions, sequencing genomes could leave people feeling completely exposed with nowhere to hide. The ability to interpret the genome will increase over time, so that seemingly meaningless data will reveal more and more. How do we deal with the DNA material and other personal data which will become available in the coming years? According to Bärbel Hüsing of the Fraunhofer Institute for Systems and Innovation Research in Karlsruhe, more guidelines and standards are needed: “With the expected increase of data exchange and internationalisation, the biomedical field needs to adopt a code of conduct on how to share these data. What level of confidentiality is needed here? How are we going to handle biobanking and personalised medicine? My personal view is that more international harmonisation of regulations by the European Union is desirable.”

While a number of specific developments exemplifying the major trends in four fields of bioengineering were studied in the project, the purpose was also to alert politicians: despite the long-term character of the megatrends, these specific developments already pose near-term policy challenges and regulatory questions.

Two questions frequently crop up in discussions about new medical technologies. The British Conservative MEP Malcolm Harbour voiced one of the most fundamental: “When we discuss the issues of prolonging human life, the question remains: how far do we want to go?” The other question is equally fundamental and unavoidable: how much is society willing to pay for health care to cover an ever longer life? When this willingness reaches its limits, how do we deal with the inequity that arises when the rich can afford treatments that the rest of us don’t have the money for?

Interventions in the brain
If you feel queasy thinking about someone manipulating your brain cells, brace yourself for what’s coming next. Your brain, the delicate organ where many of us feel our inner self sits enthroned, may not only get copied but, in some individuals, is already being regulated by technical devices. In the Blue Brain Project, technology is currently aiming to emulate biology. Not all specialists in the field believe the idea of using computer simulations to understand cognitive functioning is feasible or even particularly promising, yet the whole idea would have been inconceivable not long ago, for sheer lack of knowledge about the brain. First, experiments on animals yielded a good deal of information. Now, more direct knowledge of the human brain is being gleaned thanks to diagnostic and therapeutic technologies, including several types of brain imaging and stimulation.

Three of the major technologies here are Deep Brain Stimulation (DBS), Transcranial Magnetic Stimulation (TMS) and EEG neurofeedback. The aim of these technologies is (for now…) therapeutic: they are used for Parkinson’s disease, severe depression and ADHD, respectively. The use of these technologies is likely to be extended. For one thing, further therapeutic applications are being investigated, e.g. against epilepsy in the case of EEG neurofeedback. For another, it is very likely that healthy (or ‘neurotypical’) people could also benefit from neuromodulation by having their mood or cognitive performance enhanced, while EEG neurofeedback might well come to play a role in gaming.

This is where things get interesting from a regulatory perspective. The existing regulations for neuromodulation were drawn up exclusively for the medical domain, under the assumption that the devices would be operated and maintained by qualified personnel. But once these technologies get into the hands of less qualified operators or ordinary consumers, new requirements are needed to keep users out of harm’s way. Even with trained personnel in place, there have been cases that should raise alarm: EEG neurofeedback has caused anxiety and insomnia, and TMS can sometimes lead to hypomania, headaches and hearing loss.

‘Bioethics is ultimately toothless without biopolitics’

Once these technologies get ‘on the loose’ in society, there is an evident regulatory gap in urgent need of filling. It won’t do simply to declare the existing medical regulations applicable either, because in other domains the circumstances of use, the needs and even the risks may be different. Rather than wait for the devices to reach the market, politicians should consider the regulatory framework while these products are still under development. This is not only true for medical applications of neuromodulation: all of the other technological trends described above could spread to new, unexpected fields, such as gaming, surveillance, nursing and forensics, to name but a few. Regulators are well advised not to take a complacent, wait-and-see attitude.

The Making Perfect Life project

Research

Research has been carried out since 2009 by four member-organisations of the European Technology Assessment Group (ETAG):
• Institute of Technology Assessment (ITA), Vienna (Austria);
• Rathenau Institute, The Hague (Netherlands)(project co-ordinator);
• Fraunhofer Institute for Systems and Innovation Research, Munich (Germany);
• Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe (Germany).

What makes it special?

• By looking at interlocking bioengineering developments, it brought to light deeper trends than would otherwise have been possible.
• It examined how technological developments could spread beyond their traditional fields of application, rather than merely mapping expectations of where technology is heading and listing problematic aspects.
• It analysed how the new technological wave is challenging the existing way of governing science and technology at the European level, and the governance challenges of 21st-century bio-engineering.

Speaking as a vice-chairman of STOA (the Science and Technology Options Assessment unit of the European Parliament), MEP Malcolm Harbour notes: ‘The task for STOA in the European Parliament is now to disseminate the conclusions of this wide-ranging and complex study, and to focus the findings on relevant policy issues. Now that the STOA secretariat forms part of the Parliament’s wider directorate on Impact Assessment and European Added Value, this should help ensure a joined-up approach to policy evaluation and new policy initiatives.’

By weighing up the ethics of bioengineering, we can address issues for the benefit and protection of ordinary Europeans, but it’s up to politicians to actually do it. As project leader van Est puts it: “Bioethics is ultimately toothless without biopolitics.”


volTA magazine

volTA was a magazine on Science, Technology and Society in Europe, an initiative of fifteen technology assessment organisations that worked together in the European PACITA project, which aimed at increasing the capacity, and enhancing the institutional foundation, for knowledge-based policy-making on issues involving science, technology and innovation. It was published in eight issues between 2011 and 2015.