Category Archives: Cognitive Sciences

The Age of Artificial Intelligence, the age in which intelligent machines play a central role in our society and in our economy, is here. This is not science fiction or the prelude to a Hollywood movie; it will be the reality for most of us from the latter part of the twenty-first century. With search engines, bots, drones, and prototypes of self-driving cars, we can already perceive the first glimpses of these agents of artificial intelligence. I do not wish to call for general alarmism but, rather, only to briefly discuss what may be an important inflexion point in the history of human progress, as a handful of others were before it. The coming together, with enough industrial maturity for large-scale production, of our semiconductor and digital technologies, our algorithmic and computational knowledge, and our mechanical technologies will usher in a new age. After the mastery of fire; agriculture and animal husbandry; the alphabet and abstract scripts; the forging and casting of bronze and then iron; and the birth of the empirical sciences and the industrial revolution, self-adapting artificial intelligence is likely the next driver of a great change to our way of life, our socio-political structures, and even our ethics. If this indeed turns out to be our next technological plateau, it will have, like those that preceded it, many deep implications for all human societies.

Some of these implications will bear on our professional endeavours and our socio-economic structures. Since the dawn of agriculture, some human beings have relied on the surplus production of others for material sufficiency. Agricultural surplus allowed for hierarchical social structures, but also for a greater possibility of leisure, and from there, more time for speculation, aesthetics, knowledge development, invention, and discovery. In fact, manpower surplus has shaped social structures and traditions across the globe for millennia. In the age of smart and adaptive robotics en masse, however, this economic way of being will be challenged, as the surplus on which we depend shall increasingly come from technological agents. This will not happen overnight; it will be a gradual process. Nonetheless, a critical mass will be reached in the not-so-distant future, even in the services sector, and I fail to see how this will not precipitate important socio-political changes and reorganisations. Possibilities include more deflation stemming from gains in productivity (e.g. we can already observe numerous products and services that have become cheaper and more accessible over the past decade with digitisation and the Internet); the question of ownership of the distributed productive capacities (i.e. who will own these widely available smart robots: certain monopolies, or all of us?); the problem of subsistence for low-income populations, those who would be deprived of such smart agents and whose livelihood depends on providing services replaced by robots; and even changes to the traditional ways of exchanging goods and services, including the notion of money itself. Furthermore, as is unfortunately often the case with new technologies, intelligent robots will have direct consequences for the conduct of human warfare, as we already see with the increased usage of drones.

Other implications will be philosophical and ethical, with the increasing dissociation between intelligence and consciousness on one side and biological life on the other, extending even as far as the transfer and continued functioning of human consciousness and memory after biological death. In addition to the problems of ‘immortality’ it might generate, developed and self-recognising artificial intelligence will expose us afresh to ethical questions that have preoccupied some thinkers for millennia but have remained fringe specialist subjects until now. The problem of what makes personhood will take on more general importance in society with the coming of enduring artificial consciousness and greater self-learning and self-adapting artificial intelligence. Equally, the problem of what makes someone human will emerge again with the emergence of alternative developed consciousness; and this problem will theoretically be as vivid as when Homo sapiens co-existed with its cousins of the Homo genus, such as Homo neanderthalensis, who equally had developed consciousness (not that Homo sapiens bothered then with what makes them human; they mostly cared about staying alive). This time around, we will be facing another co-existing consciousness, only of the ‘artificial’ kind. Again, these are not fanciful scenarios today but within the remit of where artificial intelligence can take us. The ethical and legal implications are evidently tremendous; the only equivalent ‘ethical earthquake’ would be to come face-to-face with a sophisticated and conscious alien civilisation. Ironically, it is quite likely that we will create artificial consciousness before meeting any such outside civilisation.

On another front, the frontier between virtual and real will become blurrier with the expansion of an intelligent digital world. ‘Virtual reality’ and reality as we have envisaged it so far will be harder to distinguish. The body does not easily differentiate sensations triggered by virtual drivers from those triggered by real ones, especially without prior awareness of their origin. Furthermore, one of the main ways by which we distinguish the virtual from the real is the prevalence of the latter. All of this is subject to change with widespread artificial intelligence. We need only think of ourselves living constantly in a self-adapting virtual environment of our own making, or of what a robot finely imitating a baby would do to our parenting instincts. And if we already have a tendency to anthropomorphise biological animals, it will prove all the more difficult for our conscious control to constantly alert us against sensations caused by a well-engineered virtual reality or human-like robots.

These are only some of the deep implications that an age with mass-scale developed and conscious intelligence will likely bring.

We live in a civilised and technological world very different from that of thousands of years ago. Civility, knowledge, and technology have been developed collectively through the efforts and hardships, and the needs and wants, of many across times and ages. Great things have been achieved to alleviate some of the difficulties of the human condition and to make us all, on average, more knowledgeable, more capable, but also more conscious. And while all has not always been for the better, and while threats of regression exist and should be recognised, the trend, even if not a smooth and steady one, has been towards greater civilisation and civility.

One of the key achievements of civilisation and technology is probably the remarkable general increase in human life expectancy around the world over the past few centuries, for a host of reasons, medical and otherwise. And while this achievement is of tremendous value to us – as it would be to any living being – it does not come without new challenges. These challenges can be seen in the disconnect we increasingly face today between our biological condition and the civilisation we have created. We live much longer with civilisation and technology, and we need to live much longer to do something meaningful; all the while, some of the key characteristics of our biology have not changed. Let us take a few examples:

By our late twenties, our cognitive processing speed is already well into decline. We become wiser with age, and probably better decision-makers overall, but we do not have the same cognitive power as when we are young. We may also become less creative and imaginative in some areas, although this may owe more to longer periods of cultural conditioning than to aging per se – the two cannot be completely isolated from each other in any case.

A woman's fertility declines substantially year on year through her thirties, and even faster in her forties, until she reaches menopause. Most men are technically fertile for most of their adult life, but their capacity for sexual activity also declines, with some studies claiming the decline starts as early as the beginning of the twenties.

And of course, physical power in humans is at its best in the late teens and early twenties, and it declines incessantly after that, all else being equal. As with cognition, some sports players manage to change their game as they grow older in order to last longer, but both intensity and endurance decline from the late twenties onwards. The same goes for our motor skills and the sharpness of our senses.

We live today, on average, well beyond our physical, cognitive, sexual, and sensory peaks. Life expectancy in the developed world is approaching eighty, and has already surpassed it in some countries. The one-hundred-year mark for life expectancy is a distinct possibility within the coming two centuries. This means that we will live, on average, more than fifty years, and more than two-thirds of our ‘useful’ life, beyond our biological peak(s).

A world where we needed to reproduce fast and in numbers, and to conquer, dominate, and leave a legacy as quickly as possible before we died, is no more – we have greater latitude in time. And we need this latitude given where our civilisation and technology stand today. We need more time than we did centuries ago to absorb all that civilisation has developed, to learn, and to understand. And so, by the time we have learned and understood enough, and become sensible enough, we are already well beyond our biological peak.

This disconnect creates many practical challenges for us; we increasingly struggle to reconcile our biological condition with the civilisation and technologies we are creating. We try to remedy the disconnect by specialising (i.e. not learning everything but advancing along one particular path as quickly as possible in order to produce something new in it, while counting on others in society to do the rest); by leaving school early to focus on a particular sport or modelling career, and returning to studies only afterwards (if at all); or by looking for new medical ways of ‘going around’ our biological condition, such as freezing eggs or finding a surrogate if a woman is too old by the time she decides to have a child. There is also another (lazier) way that is more dangerous to adopt: to blindly bypass whole aspects of civilisation, not bother understanding the achievements made so far, and become mere tools of civilisation and technology rather than conscious drivers of them.

It is likely that the disconnect between our biological condition and civilisation will widen even further with the continuous improvement in life expectancy and the continuous increase in the informational and knowledge richness of the environment on which civilisation depends. We have to do something about it, no doubt. And maybe our cue comes from evolution. As we evolved to become human beings, we, and most primates for that matter, dropped biological features that may have been advantageous individually for the benefit of other features, while relying more on the community we started to live in to compensate. For example, our capacity to see wider angles was reduced for the benefit of much better three-dimensional vision, while we counted on others in the community to spot any danger coming from the angles we could no longer see. Today, it seems reasonable that we may need to do more of such ‘outsourcing’ and sharing as we live far beyond our biological peak(s). With civilisation and technology, we increasingly rely not only on the community but also on machines and outsourced intelligence. This may raise genuine fears of dependency and loss of control, but there does not seem to be a reasonable way around it unless we start learning and understanding faster – we need only strongly mitigate any possible risks.

As another corollary, it is quite likely that greatness will, going forward, become even more disconnected from biological peak(s). The greats of tomorrow may be very different from the greats of the past, and collective greatness may become entirely dominant over individual greatness.

There are many important concepts that we need to hold in order to make sense of the world around us; we give specific names to these concepts and engage in important, deep discussions about them. These names refer in many cases to abstract concepts, and they serve as much-needed ‘gap-fillers’ in our prevalent wording, reasoning, interpretation, and understanding of the world around us. These names therefore fulfil a pragmatic function in the context of our talking about, understanding, and consequently acting on the world around us.

But given the abstract nature of the concepts that hide behind the names, a little examination of the meaning these names hold for different people shows a great diversity of opinions and views. It is as if everybody agrees to use the same conceptual names while most, in reality, disagree on their exact meaning. The origins of these conceptual words are often uncertain, and the current meanings of these words are frequently a far cry from the original historical intentions behind them; in other words, the meanings behind conceptual words do evolve, and this evolution largely explains the differences in the actual thoughts behind the words (or, we could say, the differences in noema). Some conceptual names continue to be culturally transmitted over long periods of time. Such names become a cultural reality, a lingual and cultural tool, more so than the actual concepts behind them as first intended. Let us be specific: Causality, God, Free Will, Mind, and Essence are all examples of such important conceptual words that are very widely used and ‘believed in’, yet with great ongoing disagreement on their details.

As our knowledge advances concerning the many details that hide behind such conceptual words, we continuously discover how shallow our definitions and uses of these words have been, and how unimportant and irrelevant some of them become in our new state of knowledge. These names become cultural and historical artefacts, which we may continue to employ for convenience, but without substantial belief in them and with no real epistemological value. The word Essence is an example: it held great philosophical and religious value for many centuries. Essence was an important concept in Greek philosophy and was perpetuated under different forms through the scholastic period and up until early modern times. Saint Thomas Aquinas, for example, was particularly preoccupied with the problem of essence in what concerns cannibalism. Today, the word Essence has ceased to occupy any serious position in modern philosophy. What is the reason behind this? Our advances in the knowledge of the details of the world around us have made this word an unnecessary gap-filler to maintain. Almost all the key constituents of the word Essence, across its different possible meanings, were stripped from it with time and attached to other words that became more culturally predominant; in the process, the word Essence became void of any special meaning from an epistemological point of view. This dynamic is also at work with Causality, God, Free Will, and Mind. Much of what was encompassed by these words is being stripped out and given a more solid footing in other, newer concepts, both scientifically and culturally. The importance and the meaning of particular concepts change with the evolution of our complicated web of knowledge.

The unfortunate part is that many still refuse to admit the historical fact of this evolution of important abstract words and concepts. As they hold to old and obsolete views of the world, refusing knowledge and refined understanding, they continue to cling to these historical artefacts as if they were immutable and deified concepts; they try in every way possible to keep these words alive and relevant. There is no harm in continuing to use old conceptual words, as long as one is clearly aware of the actual epistemological value behind them and does not succumb to the illusion of some mystical reality that has no convincing basis. If I use the word Zeus in a fictional manner, it does not mean that I believe in Zeus, or that the name and concept of Zeus are essential to my understanding of the world around me or to the validity of my knowledge system. It is the same for many old conceptual words.

When we attempt to render human cognitive abilities special, or try to shed light on what makes our mental capabilities different from, or ‘better’ than, other cognitive capabilities around us (be it in other animals or in artificial machines), we can think of many elements: a higher level of consciousness, a developed memory, or advanced analytical and logical capabilities with wide scope. Yet it is the power of our mental shortcuts that is of crucial importance and is often omitted. These mental shortcuts have been given many names: intuition, heuristics, problem-solving tools, etc. In fact, what still makes humans capable of producing things that artificial machines cannot achieve today is not necessarily their ‘intelligence’ or their ‘memory’; it is rather their ability to approach mental data efficiently through cognitive shortcuts, their ability to translate problems into equations quickly and efficiently (equations that machines are more capable of solving than we are), and their ability to represent cognitively a wide variety of the things they experience.

This does not mean that our mental shortcuts are always right; many shortcuts do lead to errors in judgement and behaviour in some situations. The origins of these shortcuts are partly instinctual, partly developed with age, and largely altered by experience and by the environment. This makes mental shortcuts a difficult subject to understand, and one of very wide scope. Mental shortcuts are key constituents of what we call cognitive models, on which much of our knowledge and conception of Reality and Existence depends.

Historically, Henri Bergson envisaged two types of intelligence: one analytical, which operates by reducing a problem into smaller pieces, analysing each piece, and recomposing by conjunction; and another, more intuitive intelligence, inscribed in duration (la durée réelle ou la durée créatrice), where everything is considered as a flow and not as a sum of parts, as in the case of analytical intelligence. Curiously, Bergson saw analytical intelligence as a hallmark of human intelligence (alongside the intuitive intelligence that we share with animals) and made it the cause of many of our fallacies and weaknesses of understanding. Our recent knowledge in the field of cognitive sciences points to the contrary: we have very powerful mental shortcuts, very powerful heuristics; we approach our experiences in a nimble and short-circuited manner more than in an analytical one. In our mental shortcuts lies a good deal of our greatness, but also of our faults.