
1 Noosphere Epistemology

13.02.2010 19:08:04

- A unified description framework for evolutionary and social epistemology - by Stefan Pistorius, Zirndorf -

ABSTRACT According to Pierre Teilhard de Chardin, nature's latest mainstream of evolution is the ‘noosphere’, the global sphere of human thought. We interpret Teilhard’s holistic view on the noosphere as an evolving network of knowledge. To model the network, we introduce the concept of interactive adaptive Turing machines (IATM). Based on the IATM model, we define epistemic concepts like ‘factual’ and ‘transformational’ knowledge, 'knowledge domains', and the 'propagation' and 'evolution' of knowledge. The model takes into account both people and their cognitive technical equipment. So the noosphere network embraces the Internet and all humans. It is complex, dynamic, and adaptive. According to the model, the ontogeny of an individual’s knowledge follows rules analogous to the phylogeny of knowledge domains and the overall noosphere. Thus, the model is a first step towards a unified evolutionary and social epistemology. Moreover, the network view of knowledge allows us to derive new epistemic insights from observations of the evolving Internet and from complex network research. In the light of the model, we finally discuss Teilhard's vision of the noosphere converging towards the so-called Omega Point.

1 Introduction

Pierre Teilhard de Chardin (born 1 May 1881 near Clermont-Ferrand; died 10 April 1955 in New York) was a French Jesuit, palaeontologist, anthropologist and philosopher. In his work, he tried to reconcile science and religious faith. In his two most important works, ‘The Phenomenon of Man’ (orig.: Le Phénomène Humain, 1955)1 and ‘The Appearance of Man’ (orig.: La Place de l'Homme dans la Nature, 1956)2, he describes the evolution of the universe from its beginning to the formation of the planets and the evolution of the biosphere. With the dawn of humankind a new sphere evolves, the noosphere, the sphere of thought. Now the evolution of the noosphere is the most important thread of evolution. In its first phase, it expands, conquers the globe, and diversifies into a multitude of different cultures that evolve, disappear and cross-fertilise each other. In its second phase, which, according to Teilhard, has just begun, the noosphere is in a state of accelerated convergence. Now the spiritual forces strive for unification. At the end of this phase, in a few million years, at the Omega Point, humankind could be united in a collective consciousness based on a harmonised world view3. Teilhard was convinced that humans would find ways to bring their brains to perfection. Between 1948 and 1950, he wrote: ‘I am thinking of the amazing performance of electronic machines (the results and the great hope of the aspiring ‘Cybernetics’). These devices replace and multiply the computing and inference capabilities of the human mind by such ingenious methods and to such an extent that in this direction we can expect an increase in our abilities as great as the one brought by the evolution of our vision.’4

Our approach
The central idea of this article is to describe the knowledge evolution of all humans and their cognitive technical equipment by means of a dynamic adaptive network model. With this approach to epistemology, we pursue the following objectives:
• The algorithmic foundation of the model leads to well-defined epistemic concepts.
• The model is a powerful description framework, which embraces both individual and super-individual knowledge evolution. Therefore, we see it as a first step towards a unified evolutionary and social epistemology.
• Besides its descriptive power, the network view of knowledge reveals new aspects of the evolution of knowledge, derived from complex network research and from observations about the impact of the Internet. Thus, we can discuss Teilhard’s hypothesis of a ‘convergent’ noosphere.
We proceed as follows: the main part of this essay (Sections 1 - 4) is dedicated to a semi-formal, step-by-step introduction of the dynamic network model and all necessary concepts. To motivate the central definitions, we present a detailed example. In Section 2, we define ‘interactive adaptive Turing machines’ (IATM), the computational model for a dynamic adaptive network of interacting intelligent agents. It is similar to a model first introduced by Jan van Leeuwen and Jiří Wiedermann5 and represents an interactive version of the classical ‘Universal Turing machine’. From the computational model, we can derive precise definitions of important epistemic concepts. First, we introduce our notions of ‘factual’ and ‘transformational’ knowledge. In Section 3, we apply these definitions to model the network of knowledge of a single agent, which we call her/his/its 'world view'. In Section 4, we look at networks of interacting agents. A group of interacting agents may constitute a particular field of knowledge, which we call a 'knowledge domain'. The knowledge network of all agents constitutes the overall noosphere.
On each level of granularity, from a single agent's network of knowledge to super-individual knowledge domains and the global noosphere, knowledge evolution follows similar rules. In the second part of this article (Sections 5 - 7), we indicate how to apply the model and discuss some initial results. In Section 5, we discuss how to integrate existing approaches to evolutionary and social epistemology. In Section 6, we discuss epistemic consequences derived from the evolution of information technology, especially the Internet, and from results of what is known as scale-free network research. In Section 7, we reconsider Teilhard's Omega Point theory.

2 Standard Turing machines and interactive adaptive Turing machines

The English mathematician Alan Turing provided an influential formalisation of the concepts of algorithm and computation with the so-called Turing machine. In our context, an informal description will suffice:

Definition 1: Standard Turing Machine (TM)
A Standard Turing machine (see Fig. 1) is a device with a finite control and an unbounded amount of read/write tape memory. The finite control can assume a finite number of states. For each non-halting state and the current input symbol (from a finite alphabet Σ) on the tape, there is a rule which tells the read/write head what to do next. It reads its input symbol from the tape; then it may move one field to the left or to the right, write a symbol (from Σ) onto the tape, and assume a new state. We say the TM accepts an input string (which must be finite) if it starts at the beginning of the input string and halts after a finite number of steps in a halting state. The new, rewritten string on the tape is called the output string.

On an even more abstract level, every TM is a device that reads a finite input, does some ‘computations’, and produces an output. For the abstract (mathematical) concept of the TM it makes no difference whether the step-by-step routines (i.e. algorithms) are carried out by a ticket machine, a computer or a human. Turing machines are only one of many ways to describe the mathematical class of what are known as µ-recursive functions6.
Fig. 1: A Standard Turing machine. A read/write head moves over an endless tape of cells carrying symbols Si+j ∈ Σ (a finite alphabet), directed by a finite control with a finite set of states and a finite set of rules.
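Definition 1 can be made concrete with a small sketch. The rule table, states, and alphabet below are a hypothetical example machine of our own (a binary incrementer), not part of the formal model; the unbounded tape is represented as a sparse dictionary.

```python
# A minimal sketch of a Standard Turing machine (Definition 1).
# This example machine increments a binary number, e.g. '1011' -> '1100'.

class TuringMachine:
    def __init__(self, rules, start, halting):
        self.rules = rules        # (state, symbol) -> (new_state, write, move)
        self.start = start
        self.halting = halting

    def run(self, tape, max_steps=10_000):
        tape = dict(enumerate(tape))       # unbounded tape as a sparse dict
        state, head = self.start, 0
        for _ in range(max_steps):         # Theorem 1: no general halting test,
            if state in self.halting:      # so we impose a step bound
                cells = sorted(tape)
                out = ''.join(tape[i] for i in range(cells[0], cells[-1] + 1))
                return out.strip('_')      # drop blank cells at the edges
            symbol = tape.get(head, '_')   # '_' denotes a blank cell
            state, write, move = self.rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'R' else -1
        raise RuntimeError('no halt within step bound')

# Rules: walk to the rightmost digit, then add 1 with carry.
rules = {
    ('right', '0'): ('right', '0', 'R'),
    ('right', '1'): ('right', '1', 'R'),
    ('right', '_'): ('carry', '_', 'L'),
    ('carry', '1'): ('carry', '0', 'L'),
    ('carry', '0'): ('halt',  '1', 'R'),
    ('carry', '_'): ('halt',  '1', 'R'),
}
tm = TuringMachine(rules, start='right', halting={'halt'})
print(tm.run('1011'))  # -> 1100
```

The same device could just as well be 'run' by a human with pencil and paper; as noted above, the carrier of the step-by-step routine is irrelevant to the abstract concept.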

To avoid a misunderstanding, we have to emphasise that in our epistemological context we do not intend to model the human brain, as in the early theories of Machine State Functionalism within the philosophy of mind7. The idea (see Section 3) is to use a model of computation for a definition of knowledge that is independent of the know-how carrier. It can thus be attributed to both humans and their 'intelligent devices'. From the theory of TMs we can derive a result that is important to the subsequent discussion of knowledge and truth:

Theorem 1: The following problem concerning Turing machines is unsolvable: given a Turing machine M and an input string w, does M halt on w?8

So far, we have only talked about algorithms and the notion of a single Turing machine that starts with a fixed input. In order to model an evolving network of intelligent, interacting agents, four new ingredients need to be added to the model of computation:
• interaction of agents,
• infinity of operation,
• persistent memory and
• non-uniformity of programs.
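The unsolvability claimed in Theorem 1 can be sketched via Turing's classical diagonal argument. The names below (`halts`, `diagonal`) are our own illustrative labels; `halts` is the hypothetical oracle whose existence the argument refutes.

```python
# A sketch of the diagonal argument behind Theorem 1. Suppose a function
# halts(program, data) existed that always correctly returned True/False.
# Then the program `diagonal` below would be contradictory.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError  # cannot exist, as the comments below show

def diagonal(program):
    # Do the opposite of what the oracle predicts for a program run on itself.
    if halts(program, program):
        while True:   # loop forever
            pass
    return            # halt immediately

# Feeding `diagonal` to itself yields a contradiction:
# - if halts(diagonal, diagonal) is True, then diagonal(diagonal) loops forever;
# - if it is False, then diagonal(diagonal) halts.
# Either way the oracle answers wrongly, so no such `halts` can exist.
```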

Theorem 2: For every finite set S of IATMs there exists a single IATM M that sequentially implements the same computation as S does14.

In other words, Theorem 2 tells us that a whole network, or parts of a network, of interacting Turing machines can be regarded as a unity, because a single (more complex) machine can simulate it. Subsequently, we will model the knowledge of a single human or a single computer by a network of IATMs. Thus, the knowledge network of all interacting humans and their computers is a network of networks. However, even the overall network of all agents (i.e. humans or cognitive equipment) can still be seen as a unity, which we will call the noosphere. The noosphere as a whole interacts with nature. If the interaction between an IATM M and its environment (i.e. other IATMs or nature) leads to some kind of disruption (see Section 4), then M or the environment might 'mutate' and sometimes successfully adapt its algorithm, or the interaction might continue to be disturbed. To mutate, the IATM rewrites the parts of its persistent memory containing the algorithmic rules. In other cases, both may get an ‘upgrade’ or ‘adaptation message’ from a third-party IATM interacting with both in order to rewrite/adapt the algorithm. If, for instance, M1 and M2 are computers that exchange erroneous business data, then M3 could be a human or a computer on the Internet that provides the upgrade of M1 and/or M2. ‘Adaptation messages’ are highly effective (far more effective than rare and undirected 'mutations') and can be regarded as a kind of learning from others.

3 Knowledge and individual world views
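Theorem 2 can be illustrated by a minimal sketch: two hypothetical agents are interleaved by a single sequential simulator, so the pair behaves like one machine. The agents, messages, and round-robin schedule are our own illustration, not the construction used in the cited proof.

```python
# A sketch of Theorem 2: one 'machine' sequentially simulating a finite set
# of interacting machines. Each agent is a Python generator that consumes a
# message and yields a reply; the simulator interleaves their steps.

def shouter():
    msg = yield
    while True:
        msg = yield msg.upper()

def echoer():
    msg = yield
    while True:
        msg = yield f'echo: {msg}'

def simulate(agents, message, rounds=2):
    """A single sequential process running all agents on a shared message."""
    for agent in agents:
        next(agent)                 # prime each generator
    for _ in range(rounds):
        for agent in agents:        # sequential interleaving of the set
            message = agent.send(message)
    return message

print(simulate([shouter(), echoer()], 'hello', rounds=1))  # -> echo: HELLO
```

From the outside, `simulate` is indistinguishable from one more complex machine: this is the sense in which a network, or the whole noosphere, can be regarded as a unity.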

In order to motivate the notion of knowledge and knowledge evolution, we first present a typical example of a business process in which a group of intelligent agents works together to settle an invoice. The process will be modelled by IATMs in a semi-formal way, and a definition of the factual and transformational knowledge of IATMs will be given. In Section 3.2, we elaborate on the kind of knowledge involved and show how to apply the formal definition. We demonstrate that it can be applied to humans as well as computers.

3.1 Example and definition of knowledge

Example: Company CC orders 10 new PCs for its staff from its supplier for business computers, BC. After a few days, CC’s in-box agent receives the invoice from BC. To settle the invoice, several steps have to be performed. We model the process by interacting IATMs. It should be mentioned that the design of the workflow between the different IATMs is just one solution out of many that would be possible. For the further discussion, we concentrate on the content of the memory (PROP) and on the rules (TRANS) needed.
• IATM-1 is an OCR (Optical Character Recognition) device that scans pieces of paper and delivers a sequence of characters. In order to fulfil this task it has to ‘know’ the following:
  o PROP: in its internal memory there has to be a pattern for each of the possible letters.
  o TRANS: the rules of the IATM specify how to read the pixels on the paper and how to match them against the appropriate letter pattern. If successfully matched, IATM-1 outputs the respective letter. In other words, IATM-1 translates a stream of pixels into a stream of letters.

• IATM-2, the invoice registrar, is a device which takes a stream of letters as its input and transforms them into meaningful pieces of information like ‘name of supplier is BC’, ‘the address of BC is ... ‘, ‘BC's account number is ...’ and so on. If it has found every needed piece of information, it sends all data and a statement ‘The invoice is complete’ to IATM-3. To do so it might make use of the following:
  o PROP: in its internal memory, it might have collected all relevant master data about the already known suppliers. This information could be helpful if the invoice could not be completely read by IATM-1 due to bad print quality. Moreover, in its memory it might have information about the typical structure of the invoices of all already known suppliers.
  o TRANS: the rules of IATM-2 compose letters into words and implement heuristics such as: supplier names can mostly be found at the top of the page, item prices on the right, account numbers at the bottom and so on. For a particular supplier, the memorised information about its typical invoice structure might help to improve the results. If the actual invoice differs in structure, the memory will be adapted.
• IATM-3, the accountant, gets a stream of data about invoices and, at the end of each invoice, the information whether the invoice is complete. It computes whether the summation is correct. It then consults IATM-4/5 (i.e. the purchasing department and the IT department) for performance acknowledgement and waits for positive feedback. If everything is all right, the accountant releases a transfer order to the bank.
  o PROP: in its memory, IATM-3 has information about the prices agreed upon with the different suppliers. Another IATM that is responsible for supplier contracts adapts this information regularly.
  o TRANS: the rules define the steps to check the invoice for computational errors and whom to consult for performance acknowledgement. Then the transfer order has to be generated.
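The workflow above can be sketched in a few lines, with each IATM as an object holding its PROP memory and a TRANS rule. The message formats, supplier data, and heuristics below are simplified illustrations of the example, not a faithful implementation of the process.

```python
# A toy sketch of the invoice workflow of Section 3.1. The OCR step
# (IATM-1) is assumed to have already produced the letter stream.

class Registrar:                     # IATM-2
    def __init__(self):
        # PROP: master data about already known suppliers (illustrative)
        self.prop = {'known_suppliers': {'BC': {'account': 'DE99...'}}}

    def trans(self, letters):
        # TRANS heuristic: supplier name on the first line, total on the last
        lines = letters.splitlines()
        supplier, total = lines[0], lines[-1]
        complete = supplier in self.prop['known_suppliers']
        return {'supplier': supplier, 'total': total, 'complete': complete}

class Accountant:                    # IATM-3
    def __init__(self):
        # PROP: prices agreed upon with the different suppliers (illustrative)
        self.prop = {'agreed_prices': {'BC': '5000 EUR'}}

    def trans(self, invoice):
        if not invoice['complete']:
            return 'reject: incomplete'
        if invoice['total'] != self.prop['agreed_prices'][invoice['supplier']]:
            return 'reject: price mismatch'
        return 'release transfer order'

letters = 'BC\n10 PCs\n5000 EUR'     # assumed output of the OCR step (IATM-1)
invoice = Registrar().trans(letters)
print(Accountant().trans(invoice))   # -> release transfer order
```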
We omit the further steps by IATM-4/5. In order to simplify the example, we did not describe the cases in which the invoice is incorrect. Therefore, in reality the interaction of the IATMs might be much more intensive. Now we focus on the kind of knowledge needed to run the process.

Definition 3: Factual knowledge of an IATM at time t
The factual knowledge of an IATM M at time t resides in its memory and may consist of the following:
• data patterns (concepts or sensorial patterns) needed to process and memorise input,
• propositions (either received from other IATMs or derived from M's own algorithm).

Definition 4: Transformational knowledge of an IATM at time t
The transformational knowledge of an IATM at time t is the algorithm residing in its memory at t, used to analyse input and to derive new messages.

Expressed in short terms, Definition 3 captures knowledge-that (derived from knowledge-how) and Definition 4 captures knowledge-how. In our context, knowledge-how is the kind of knowledge needed to derive new factual knowledge from already existing knowledge or from input from nature. Factual knowledge can be everything from basic concepts and simple propositions about observations up to propositions in scientific theories. The difference between factual knowledge and ‘information’ is that the former has to be processed and accepted by an IATM, whereas information does not need to be processed by anything. However, Definitions 3 and 4 also differ fundamentally from the classical definition of knowledge as ‘justified true belief'15, because the definition is purely mathematical and independent of any knowledge carrier, and hence there is no ‘belief’ and ‘justification’ and no absolute 'truth', as we will argue in Section 4. In general, the definitions abstract from any concrete human mental states, motivations or cognitive mechanisms. There is no ‘believer’ as long as we don’t apply the knowledge definition to humans. If we apply it to humans, then factual knowledge may be propositions that could be justified belief. Section 3.2 discusses in more detail how knowledge can be attributed to agents, but the knowledge core is an abstract notion independent of the knowledge holder. This independence is essential for the further discussion in Section 6, because we argue that Teilhard's noosphere is composed of humans and their technical cognitive equipment, namely the Internet and everything that is used to produce factual knowledge.

3.2 Individual world views of agents

accountant and the others. In addition, if we consider all the other cognitive capabilities each human possesses, we must say that a whole network of hundreds, thousands or even millions of interacting IATMs could be necessary to describe a single human's formalised conceptual and non-conceptual, factual and transformational knowledge17. Fig. 2 visualises the network of an agent's (human or computer!) knowledge. It is arranged in different layers. Each layer represents an IATM, with the processing rules on the left and the content of its memory on the right. Instead of four layers, there could also be many more or fewer. In the example, we had only one layer of non-conceptual processing, the OCR device. It all depends on how we model the intermediate steps. The Layer-2 IATM of Fig. 2 interacts with the Layer-1 IATM and uses its memory content to produce simple propositions, which it outputs to the Layer-3 IATM and possibly to its own memory. Note that in Fig. 2 the term ‘environment’ means the environment of the agent as a whole. For the different IATMs involved, anything that is outside their own layer is their environment.

Fig. 2
The knowledge units in an agent's memory are related to each other in various ways. As we demonstrated, all of an agent's knowledge can be regarded as a network of many interactive adaptive Turing machines. Altogether, they account for her/his/its view of the world, the world view. For the further discussion we need:

Definition 5: The world view of an agent at time t is the network of IATMs representing her/his/its conceptual and non-conceptual, transformational and factual knowledge at time t (see Fig. 2).
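As a rough illustration of Definition 5, a world view can be represented as a layered structure in which each layer holds its own memory and feeds the next. The layer names and memory contents below are our own labels, loosely following Fig. 2.

```python
# A minimal sketch of a world view (Definition 5) as a layered network of
# IATMs: each layer has a memory (factual knowledge) and feeds the next.

world_view = {
    'layer1_sensorial':    {'feeds': 'layer2_concepts',     'memory': ['letter patterns']},
    'layer2_concepts':     {'feeds': 'layer3_propositions', 'memory': ['word: invoice']},
    'layer3_propositions': {'feeds': 'layer4_theories',     'memory': ['invoice is complete']},
    'layer4_theories':     {'feeds': None,                  'memory': ['double-entry bookkeeping']},
}

def propagate(view, start, message):
    """Pass a message up the layers; each layer memorises it as factual knowledge."""
    layer, trace = start, []
    while layer is not None:
        view[layer]['memory'].append(message)
        trace.append(layer)
        layer = view[layer]['feeds']
    return trace

print(propagate(world_view, 'layer1_sensorial', 'new input'))
```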

The main purpose of interaction is the exchange of messages. Since messages can be knowledge, interaction processes may lead to knowledge propagation. We talk about knowledge propagation if one IATM outputs a message and another accepts and memorises it as input. If the interaction process is disrupted and one or both parties adapt their transformational knowledge, we talk about knowledge evolution. In order to motivate the model of knowledge evolution, we will elaborate on possible operational faults between interacting IATMs and their strategies to ‘settle their differences’. We will concentrate on the interaction between two IATMs. From Theorem 2 one can derive that this is enough to describe the ‘settling’ process for a whole network. The argument is as follows: if we want to study the interaction processes in a set S of interacting IATMs, we may begin with any IATM M in S, then simulate S - {M} by a new IATM S' (possible because of Theorem 2), and then study the interaction of M and S'. At first sight, this doesn't seem very reasonable for real-life situations, but it is! Let us look at an example: if M is a human who exchanges intelligent messages (emails, chats, whatever) with her/his intelligent friends A and B via the Internet by means of her/his computer C, then all M needs is C! M only interacts with C's keyboard (input) and screen (output) and nothing else! Nevertheless, we know that in reality M exchanges intelligent messages with her/his friends A and B, and C is only a kind of interface. However, A and B are intelligent and they produce their well-considered messages by some well-designed cognitive processes. If this is the case, then C can simulate A and B by means of software that implements the well-designed cognitive processes of A and B. If the software is good enough and passes the Turing test18, M doesn't even notice the fraud.
The point is that C with its new software, let us call it C', can simulate the network of A, B and C. But what if A and B are sitting next to M in her/his living room? The answer is: with respect to the message exchange process, it doesn't make a difference. The contact between M and her/his friends A and B is again via an interface, her/his senses, probably most of all her/his ears and eyes. The well-considered part of the message exchange is as before: a background computer could simulate their messages. The more difficult part is the non-verbal communication. We only have a chance if this can also be formally modelled. Until now this is science fiction, some sort of perfect virtual reality, as in the movie ‘The Matrix’. We do not have to discuss whether this is possible. The point is: if it can be formally modelled, then it can be modelled by a single IATM. In order to understand what kind of disruptions might happen between two interacting IATMs, we extend the example of Section 3.1, where a sequence of interacting IATMs settles an invoice. In contrast to the original version of the example, we assume that the process may be disrupted by some errors. We distinguish the following:
a) Erroneous knowledge of an IATM: The invoice registrar may have a bug in its rules or in its factual knowledge base, such that it occasionally states for some invoices that they are complete although they are obviously not.
b) Knowledge of sender is contradictory to knowledge of receiver: In this case, the registrar works as intended and produces the correct English statement, but its output is still not accepted by the accountant. The reason is that the accountant needs some additional information about the invoice, for instance the VAT, a piece of information that the

registrar does not analyse because it is not part of its transformational knowledge. Because of this, for the accountant the registrar's output ‘The invoice is complete’ is false, although it is justified true belief (i.e. factual knowledge) of the registrar. The reason is that both have a different concept of an ‘invoice’. Therefore, the registrar's (transformational and factual) knowledge contradicts the factual and transformational knowledge of the accountant.
c) Output alphabet of sender does not match the input alphabet of receiver: The OCR device may have problems reading the letters because BC's invoice is written in Chinese and it does not have any patterns for Chinese symbols. Consequently, it does not produce an output the invoice registrar can cope with.
In concrete networks, there are many more possible sources of disruptions resulting from interaction. For instance, synchronisation problems or message-routing problems with loss of messages are difficult problems in real-world networks. We can abstract from these, because they are not essential for the discussion of knowledge propagation and evolution. To dissolve the disruptions, the IATMs have to adapt. Such adaptations can in principle take place after each message read from the input stream. They change the computational behaviour of an IATM and possibly its memory content. We interpret such adaptations as evolving knowledge. Based on the described types of disruptions, we will analyse what kind of adaptations we need or, in other words, how the knowledge evolution works:
a) Adaptation of a single IATM: If the invoice registrar operates with buggy rules, then the rules have to be adapted. The nature of the adaptation depends, of course, on the problem. The interesting questions are: how can we avoid such bugs, and how can we be sure that the adaptation is a correct solution to the problem?
The answer is: for theoretical and practical reasons, we can never be sure that in a dynamically changing environment an IATM works as required. Because this argument is essential for the further discussion, we will elaborate on it. If we first look at the problem from a theoretical point of view, we have to be precise about what it means to prove that an 'IATM M works as required'. Since M could be adapted at any time, we assume that M, beginning at time t, consumes only one message (i.e. a foreseeable input string of finite length). By this assumption, we look at M as if it were a Standard Turing machine (see Definition 1) for a while. Then we need a formal specification of the expected behaviour of M and a proof that M performs accordingly. Unfortunately, Theorem 1 tells us that we can't even be sure that M will ever halt on the input message. All we could prove is the so-called partial correctness of M at time t19. Since we are interested in M's performance in the context of its environment, we have to make assumptions about the environment too. For if we do not care about M's environment, it could happen that the interaction no longer works, because the environment has changed the interaction rules without prior notice and could thus send an unacceptable message. But if we want to be sure about the behaviour of the environment, we also need a proof of the partial correctness of the environment at t20. Even if we did the cumbersome work of formally specifying the required transformational behaviour at t and proved the partial correctness of both the IATM and its environment at t, this could only be useful for a period without changes, between t and t + x. The question is what we expect of M if something in the interaction process changes. We probably expect M or the environment to adapt somehow. However, a priori we do not know

needed to agree on a common invoice concept. In the context of their families, which are not part of their ‘invoice knowledge domain’, the invoice concept was irrelevant and no changes took place. Therefore, we say the knowledge domain specifies the requirements for transformational and factual knowledge at time t and decides on the correctness, or rather the adequacy, of it. Knowledge changes may have serious and costly consequences. Every change of a fundamental concept of an IATM can affect a wide range of IATMs in the interacting environment. The worst-case scenario is that the change propagates errors through the whole network of interacting IATMs. Some of the errors might emerge immediately, because an interacting IATM no longer accepts its input, since it does not know anything about the intended change. Or it accepts the input somehow but produces obviously false output. Other errors might not be detected for a long time, and when they emerge, other changes might have taken place in between and it takes a lot of time to find the cause. The question is: can such situations be avoided? The cheerless answer is, once again: a lot can be done to minimise negative effects but, for theoretical and practical reasons, we can never be sure.
c) Adaptation of syntax: With respect to knowledge evolution, this case is a special case of b), but the required changes only affect the syntax of input and output of the respective IATMs involved. Therefore, we don't have to worry about a possible chain reaction as in case b). If the OCR of company CC cannot cope with Chinese, it might ignore the input and wait for an invoice from BC written in English. And if BC does not cooperate, the purchasing department could decide to change the supplier. This strategy works as long as there are enough alternatives or as long as the interaction is only occasional. In the example, it probably depends on the market power of both parties.
If BC has an unchallenged supremacy in the market, CC will have to adapt. As soon as the interaction gets more intense, one of the parties has to adapt its transformational knowledge. In this case, either BC will learn English or CC will learn Chinese, or they will agree on a third language. We summarise the results of this section by the following statements about the class of interactive adaptive Turing machines:
S1) In an unforeseeable, changing environment, there is no adequate a priori correctness criterion for transformational and factual knowledge. The agents belonging to a knowledge domain specify the requirements for transformational and factual knowledge at time t and decide on the adequacy of it.
S2) Changes of transformational knowledge are triggered by contradictions within a knowledge domain and contradictions arising from interaction with the environment. By means of ‘adaptation’ or an 'adaptation message' from others, the agents of a knowledge domain adapt their knowledge to new requirements. In short: knowledge evolution is a process of trial and adaptation on error.
S3) The more intense the interaction between intelligent agents is, the more likely it is that contradictions will emerge and the higher the pressure to resolve the contradictions will be. The resolution can contribute to harmonised world views of agents.
S4) There are two options for resolving contradictions between two IATMs: either one will win recognition, or both agree to resolve the contradictions by a consistent unification of their transformational and factual knowledge. If the IATMs belong to different knowledge domains, this may lead to a unification of the domains.
S5) Changes of fundamental concepts can have far-reaching and costly consequences.
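The 'trial and adaptation on error' summarised above can be sketched as a loop in which a sender's rules are adapted until the receiver accepts the message. The agents, message syntax, and candidate adaptations below are purely illustrative.

```python
# A sketch of trial and adaptation on error: the sender's transformational
# knowledge (its output template) is adapted until the interaction works.

def receiver(message):
    """Accepts only messages of the form 'TOTAL=<number>'; else: disruption."""
    key, _, value = message.partition('=')
    return key == 'TOTAL' and value.isdigit()

def sender(rules, amount):
    return rules['template'].format(amount)

rules = {'template': 'Summe: {}'}           # initial, incompatible syntax
adaptations = ['Total {}', 'TOTAL={}']      # candidate 'adaptation messages'

attempts = 0
while not receiver(sender(rules, 5000)):
    rules['template'] = adaptations.pop(0)  # adapt transformational knowledge
    attempts += 1

print(attempts, sender(rules, 5000))  # -> 2 TOTAL=5000
```

Only the third template resolves the contradiction; in the terminology above, the sender's knowledge has evolved to meet the receiver's requirements.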

These statements cannot be proven within the IATM model, because the normative aspects of the ‘need’ or the ‘pressure’ to resolve contradictions are not formalised within the model. Nevertheless, in Section 6 we will present some real-world observations of phenomena of knowledge evolution that support S1) to S5). Based on the definitions of the previous sections, we can finally define the 'noosphere'.

Definition 7: Noosphere
The noosphere is the evolving global network of the world views of all interacting humans and their cognitive devices.

Fig. 3 visualises a network of 4 interacting agents, i.e. a small network of networks. Each agent's network of knowledge (see also Fig. 2) consists of knowledge belonging to different knowledge domains (KD1 - KD5) and of a network of non-conceptual knowledge. In the evolution of each domain, many agents may be involved. We model this by interaction of the agents (symbolised by arrows). Agents belonging to the same knowledge domain may still have different world views, and this may have a significant influence on the knowledge domain. The influence is twofold. First, it arises from the non-conceptual layers of knowledge: if the sensory systems of two agents provide different experiences, this might influence their attitude towards some knowledge domains. Second, there are of course influences from the other knowledge domains that the agent belongs to. The interaction between agents, or between an agent and nature, leads to knowledge propagation and, in case of disruptions, to knowledge evolution23 (see rule S2).
Fig. 3: Knowledge domains and world views of four interacting agents. Agent 1 belongs to KD1, KD2, KD3; Agent 2 to KD1, KD2, KD4; Agent 3 to KD2, KD3, KD4; Agent 4 to KD3, KD4, KD5.
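The network of Fig. 3 can be sketched as a small simulation of knowledge propagation: a proposition spreads from one agent to every agent reachable via a shared knowledge domain, one hop per round. The agent names and domain memberships follow Fig. 3; the propagation rule itself is our own simplification.

```python
# A toy sketch of knowledge propagation in the agent network of Fig. 3:
# a message spreads between agents that share at least one knowledge domain.

agents = {
    'Agent 1': {'KD1', 'KD2', 'KD3'},
    'Agent 2': {'KD1', 'KD2', 'KD4'},
    'Agent 3': {'KD2', 'KD3', 'KD4'},
    'Agent 4': {'KD3', 'KD4', 'KD5'},
}

def propagate(agents, start):
    """Breadth-first spread along shared domains; returns reach and rounds."""
    informed, frontier, rounds = {start}, {start}, 0
    while frontier:
        frontier = {b for a in frontier for b in agents
                    if b not in informed and agents[a] & agents[b]}
        if frontier:
            informed |= frontier
            rounds += 1
    return informed, rounds

informed, rounds = propagate(agents, 'Agent 1')
print(sorted(informed), rounds)  # all four agents are reached in one round
```

In this small, densely overlapping network every agent is reached immediately; in sparser networks the number of rounds grows with the distance between knowledge domains.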

With the definitions and theoretical results of the previous sections at hand, we can now reassess the adequacy of the model. In Section 5, we demonstrate that the theory provided can be seen as a framework for different branches of evolutionary and social epistemology.

5 The noosphere framework for evolutionary and social epistemology

aspects – is a knowledge process, and that the natural selection-paradigm for such knowledge increments can be generalized to other epistemic activities, such as thought, learning, and science. … of all the analytically coherent epistemologies possible, we are interested in those, (or that one), compatible with the description of man and of the world provided by contemporary science'24.

We think that our network model of knowledge evolution, for both individuals and knowledge societies, provides a formal description framework for such an epistemology. The model specifies important epistemic concepts like 'factual' and 'transformational' knowledge, individual 'world views', super-individual 'knowledge domains', the overall 'noosphere', and the 'propagation' and 'evolution' of knowledge. What remains to be done is to integrate those naturalistic theories that explain the causes promoting the propagation and evolution of knowledge. To give an example, we indicate a possibility of how to integrate Gerhard Vollmer's naturalistic model of evolutionary epistemology of cognitive mechanisms25 and Karl Popper's and/or Philip Kitcher's approach to the evolution of super-individual scientific theories. In Section 3.2, we modelled the different layers of an agent's transformational and factual knowledge (see Fig. 2). The layer model resembles, and can be interpreted as a formalisation of, Gerhard Vollmer's hierarchical structure of human knowledge. He calls the layers 'sensation', 'perception', 'experience' and several layers of 'scientific knowledge' (see VOLLMER, Gerhard (2003), Band 1, p. 33 or p. 89). With his 'projective model', Vollmer describes and explains how human cognition reconstructs (i.e. transforms) sensation into perception, perception into experience and, finally, experience into scientific knowledge. Moreover, Vollmer's philosophy describes and explains the 'fit' of epistemological mechanisms to the 'mesocosmic' world. His naturalistic approach refers to biological evolutionary theory, physics and the cognitive sciences. From these and other considerations, he derives his 'hypothetical realism'. The evolutionary mutation and selection processes can in principle be modelled by IATMs that represent so-called evolutionary or genetic algorithms26. The transformational step from sensation to perception can also be described as an IATM.
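The genetic-algorithm modelling mentioned above can be sketched minimally: a population of bit strings evolves by selection and mutation towards a target, standing in for the mutation/selection processes of Vollmer's account. The target, fitness function, and parameters are illustrative assumptions of our own.

```python
# A minimal genetic-algorithm sketch: selection keeps the fittest 'rules',
# mutation flips bits at random, and the population converges on a target.

import random

random.seed(0)                     # deterministic run for reproducibility
TARGET = [1, 1, 1, 1, 1, 1, 1, 1]  # illustrative 'required behaviour'

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)   # selection pressure
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                  # keep the fittest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]  # mutated offspring
print(generation, population[0])
```

As noted in Section 2, such undirected mutation is far less effective than directed 'adaptation messages', which is why knowledge evolution in the noosphere proceeds much faster than biological evolution.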
The steps from perception to experience and from experience to scientific knowledge are of a different nature. Vollmer does not describe the interactive processes within scientific communities, or influences from outside the community, that lead to the evolution of scientific theories. Nor does he describe the influence of a scientist's world view on scientific theory building. According to Vollmer's definition, evolutionary epistemology does not describe and explain the evolution of human knowledge, but only the evolution of cognitive mechanisms27. We think that Donald T. Campbell's integrating approach to evolutionary epistemology (as cited above) leads to an even deeper understanding of human knowledge. Using the dynamic network model, it is straightforward to incorporate the interaction processes within a scientific community as well as the influences from outside the domain or from a scientist's own world view (see Fig. 3). We can describe how an individual learns or is influenced by others and how personal experience can lead to disruptions and subsequently to knowledge evolution. These disruptions can contribute to the evolution of scientific theories. Karl Popper's conjectures-and-refutations approach to an evolutionary epistemology of theories28 addresses some of these aspects. According to Popper (as well as to our framework), there is no absolute truth. Every scientific theory (i.e. a 'knowledge domain') can only be valued as

a ‘conjecture’. A good theory must be falsifiable, so it is possible that new facts, i.e. messages from the environment, refute the theory. Then the existing theory, or part of it, has to be adapted. Therefore, genuine science (in contrast to metaphysics) should be seen as a progressive evolutionary process, i.e. a converging knowledge domain. Philip Kitcher reflects in more detail on the 'division of cognitive labour' within a scientific community, i.e. the message exchange processes within the knowledge domain network29. Moreover, he describes and explains the 'consensus practice' within scientific communities, and he stresses the influence of individual beliefs, i.e. the 'agents' world views' in our terminology (see also our example in Sections 3 and 4).30 Within this article, we can only adumbrate the idea of how to integrate existing evolutionary and social epistemology approaches within the noosphere framework. Bradie and Harms' article31 gives a good overview and classification of evolutionary epistemology approaches, and Goldmann's article32 gives an overview of social epistemologies, some of which are integration candidates. The model is flexible and powerful enough to integrate different individual and super-individual (e.g. social and cultural) naturalistic theories of knowledge evolution. The challenge of a unified naturalistic epistemology is to put the right pieces together and describe them within the framework provided. A unified theory should at least describe and explain the mutual influences and the promoters of individual and super-individual knowledge evolution. Moreover, it should integrate the technical aspects of knowledge propagation and evolution. In the following section, we will demonstrate that, besides its descriptive power, the adaptive network model of knowledge can also explain knowledge evolution phenomena derived from complex network research.

6 Noosphere Epistemology

So far, the network model of knowledge has served as a basis for formal definitions of central epistemic concepts and as a description framework for existing epistemologies. We now describe and explain some of the revolutionary changes in information technology. From our point of view, future epistemology should embrace the ongoing revolution of information technology, because it fundamentally changes the ways knowledge is propagated, processed, represented and developed. Moreover, it changes the division of cognitive labour between humans and machines, and it changes the way we think33. With this article, we unfold a (non-formal) perspective on the subject, which we call noosphere epistemology. We would like to find answers to the following questions:

• What are the characteristics of the evolution of the noosphere since the emergence of the Internet and the World Wide Web?
• Can we expect new sources of knowledge?
• Is there evidence of a convergent knowledge evolution as Teilhard postulated?
• Since the noosphere is modelled as a network, what can we learn from complex network research?
• According to the theoretical model, there is no principal difference between individual knowledge and knowledge corpora. Are there empirical indications that the demarcation between an individual's knowledge and her/his/its environment blurs?

6.1 The degree of knowledge propagation as a measure of noosphere evolution

From S1 – S5 we can conclude that as long as there is intense and contradiction-free interaction, knowledge is accepted as adequate (i.e. the truth criterion) and can propagate. From this we can derive that everything that helps to support the propagation and successful adaptation of knowledge among the millions and billions of agents contributes to the evolution of the noosphere towards ‘relative truth’. In section 6.2 we study knowledge evolution trends due to information technology (IT), especially the revolution of the Internet, and its contribution to the propagation and evolution of knowledge. We argue that the impact on the rest of the noosphere is enormous, although in December 2009 only about 26% of the world's population had access to the Internet34. This article mentions just a few aspects of the evolution; most of the analysis must be left to future work.

6.2 Evolution of information technology as a catalyst for noosphere evolution

Evolution of the Internet and World Wide Web as a breakthrough for the propagation of knowledge

The invention of the Internet and the World Wide Web brought a new infrastructure for the propagation of information. But propagation of information does not necessarily mean propagation of conceptual knowledge. Only if there is an agent that is able to ‘understand’ the information on a Web page can we say that conceptual knowledge propagates. If a PC's browser program processes a Web page, it does not 'know' anything about the conceptual content of the page. The non-conceptual transformational and factual knowledge of the browser is only about how to read a sequence of HTML-tagged letters and pictures and how to display them in a colourful way on the screen. The human interacting with the browser program may be able (or not) to understand and accept the conceptual content and generate new knowledge out of the Web page. Nevertheless, the WWW in its first phase, now called Web 1.0, brought much better and faster access to conceptual knowledge for millions of people. More and more people have the chance to get to know knowledge domains they did not know before. This may cause a significant change in those people's world views. Web 1.0 does not provide many possibilities for human agents to give feedback on Web content. There is only the choice to accept the content or not. Since the emergence of so-called Web 2.0 technologies, new feedback and collaboration concepts have been developing and, therefore, the evolution of knowledge domains accelerates once again. Due to Wikipedia and the so-called social networks, people around the world now have the chance to share the same cultural and scientific knowledge domains and to build new domains. New virtual organisations evolve and enable people to collaborate on a worldwide scale.
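The point that a browser handles only the syntactic layer of a page can be made concrete in a few lines of code. The sketch below uses Python's standard html.parser as a stand-in for a real browser engine; the page content is invented for illustration. The parser recognises tags and character data, but the text remains a string of opaque symbols to the machine:

```python
from html.parser import HTMLParser

# A browser-like parser 'knows' only the syntactic layer of a page:
# tags and character data, nothing of the conceptual content.
class SyntaxOnlyParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags, self.text = [], []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)                # structure is recognised ...

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())   # ... but text is opaque symbols

page = "<html><body><h1>Invoice No. 4711</h1><p>Total: 100 EUR</p></body></html>"
parser = SyntaxOnlyParser()
parser.feed(page)
print(parser.tags)  # ['html', 'body', 'h1', 'p']
print(parser.text)  # ['Invoice No. 4711', 'Total: 100 EUR']
```

That this page describes an invoice with a total of 100 EUR is conceptual knowledge available only to the human reading the rendered output, not to the parser.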
So far, we can assert that the evolution of the Internet and the World Wide Web improves the communication and global growth of conceptual, cultural and scientific knowledge domains and therefore accelerates the evolution of these domains.

Evolution of new conceptual knowledge layers

As mentioned before, the rise of the World Wide Web means only that the interacting machines share the same communication alphabets. Most of the information in the WWW does not mean anything to computers. Only if there is an application that analyses the transmitted content can the machine itself produce new conceptual knowledge. There are, of course, business applications that transfer factual knowledge (for instance an electronic invoice) to another computer that is able to ‘understand’ this piece of information, but this development has only just begun. The problem is not of a technical nature any more, because the communication standards (TCP/IP, XML, SOAP and others) are well

established. The most important barriers are of a semantic or conceptual nature. As in the example of Section 4, in industries around the world there exist many different concepts of an ‘invoice’, an ‘order’, a ‘shipping note’ or other business objects. As long as these differences are not settled, machines cannot ‘talk’ to each other on a conceptual basis. Several organisations have tried to address this problem35. If they succeed, business computers around the world will participate in the same business knowledge domains, and they could be enabled to do business autonomously around the world. Besides the business knowledge domains, of course, many other knowledge domains are not yet accessible to machines. The WWW is full of such knowledge. In 2004, Tim Berners-Lee, the inventor of the World Wide Web, proposed the so-called Semantic Web36. The basic idea is to enable computers to analyse the conceptual knowledge of the Web. It will then be possible for machines to derive new knowledge by combination or 'serendipity'37. Every Internet-connected agent will then have immediate access to the information needed in her/his/its actual context, provided she/he/it divulges information about that context. One condition for the implementation of the Semantic Web is the development of ontologies and knowledge representation concepts38. If the Semantic Web becomes reality, it will inevitably push the global unification of knowledge domains, because contradictions resulting from different basic concepts will be eliminated by design. Another future source of conceptual knowledge is called 'ubiquitous computing' or 'pervasive computing’39. The basic idea is that information processing is thoroughly integrated into everyday objects. Such objects (e.g. clothes, cars, buildings, furniture, and so on) are invisibly tagged, they have their own online presence, they can communicate with each other, and they can exchange information about their physical and virtual environment.
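The Semantic Web idea of machines deriving new knowledge 'by combination' can be sketched in miniature. Real Semantic Web stacks use RDF triples, OWL ontologies and dedicated reasoners; the sketch below uses plain Python tuples, a single invented 'is_a' hierarchy, and one transitivity rule, purely to illustrate the principle:

```python
# Minimal sketch of Semantic-Web-style inference: facts as (subject,
# predicate, object) triples plus one transitive rule over 'is_a'.
# All facts and names are invented for illustration.
facts = {
    ("e_invoice", "is_a", "invoice"),
    ("invoice", "is_a", "business_document"),
    ("business_document", "is_a", "document"),
}

def infer_transitive(triples, predicate="is_a"):
    """Derive new triples by combining existing ones (forward chaining)."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == predicate and b == c and (a, p1, d) not in derived:
                    derived.add((a, p1, d))
                    changed = True
    return derived

closure = infer_transitive(facts)
# The machine now 'knows', e.g., that an e_invoice is a document,
# although no single source stated that fact explicitly.
print(("e_invoice", "is_a", "document") in closure)  # True
```

The derived triple was contributed by no single agent; in the terminology of this article, the machine has produced a new piece of factual knowledge within the shared knowledge domain.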
By this means, it is possible for machines to collect information and derive knowledge that humans do not even know of. Gottschalk-Mazouz40 highlights some ethical aspects of this kind of knowledge propagation. He also names typical features of knowledge that are compatible with the definitions in this article. In principle, we can expect that machines will get their own senses. In scientific research (physics, biology, astronomy, meteorology and others) they already sense facts about our world (microcosm and macrocosm) that we cannot directly observe by means of our own sensory organs.

Scale-free networks and 'Correlative Analytics'

So far, we have not assumed anything about the topology of the global network of knowledge. However, new results in the theory of networks should have an important impact on the field of social and evolutionary epistemology of theories. Especially the branch of so-called scale-free network research, introduced by Albert-László Barabási (see for instance BARABASI, Albert-László (2004)), sheds new light on many scientific disciplines, such as biology, physics, computer science and the social sciences, and consequently on epistemology. The most stunning result was that complex networks tend to be scale-free. This means that the structure of the network evolves towards so-called hubs, i.e. nodes in the network that are linked to an enormous number of other nodes. The more links a node possesses, the more likely other nodes are to attach to it. This phenomenon is called preferential attachment. In the World Wide Web, for instance, some of the hubs are the sites of 'Google', 'Yahoo', 'Microsoft' and others. In his article Scale-Free Networks: A Decade and Beyond (see BARABASI, Albert-László (2009)), Barabasi writes:
35 See for instance http://www.ebxml.org/geninfo.htm/ or http://www.oasis-open.org/who/
36 BERNERS-LEE, Tim (2004)
37 Meaning 'the discovery through chance by a theoretically prepared mind of valid findings which were not sought for' (see MERTON, Robert K. (1957))
38 See for instance DAVIES, John and STUDER, Rudi and WARREN, Paul (2006)
39 POSLAD, Stefan (2009)
40 GOTTSCHALK-MAZOUZ, Nils (2007)

beyond human capabilities. All connected agents get more and better access to transformational and factual knowledge. If we assume that these trends will continue, any agent will have immediate access to all knowledge required at any moment of her/his/its lifetime. In such a scenario, we will not be able to differentiate between the knowledge of a single agent and the knowledge of the overall noosphere. Knowledge will simply come out of the 'cloud'41, or out of the noosphere, and we are part of it. It would not be relevant where the knowledge comes from, and human brains would not need to 'burden' themselves with factual knowledge they do not actually use. Knowledge would migrate from the individual's memory to the environment. The demarcation between an individual's knowledge and the environment would blur. This would be a practical affirmation of the theoretical model, according to which there is no principal difference between the ontogeny of a single individual's knowledge and the phylogeny of knowledge corpora. Another important epistemic consequence of the Semantic Web and 'correlative analytics' is that we will no longer be able to identify the source of knowledge. Therefore, we will not be able to ask any individuals for their 'justification'.

7 The Omega Point

In the previous section, we discussed the current and near-future evolution of the noosphere. One result was that knowledge domains develop on a global scale; some evolve and converge very rapidly, some vanish, and new domains emerge. However, it is not at all clear whether this will lead towards Teilhard’s vision of the Omega Point, according to which humankind could be united (‘in several million years’) in a harmonised world view. As some research results about the World Wide Web indicate, scale-free networks (with directed edges) can be 'fragmented'; this means that large parts of the web are disconnected from each other (see BARABASI, Albert-László (2004)). The overall structure of the network of knowledge, the noosphere, is as yet unknown, but we may assume that there also exists a multitude of disconnected knowledge domains, because the propagation of knowledge relies heavily on the Internet and the WWW. Moreover, the propagation and evolution of knowledge are dynamic properties of the noosphere, and research on the dynamics of complex networks has just begun. Last but not least, the 'success' may depend on the nature and quality of the different knowledge domains. Organised crime, terrorism, dictatorial regimes and so on all have their own knowledge domains, and they are all eager to protect them against their enemies. The worldwide propagation of knowledge may help to undermine the power of oppressive structures. However, as long as the usage of the world's natural resources discriminates against large parts of the world, new (knowledge and physical) conflicts will always arise and Teilhard's vision cannot come true. Although we do not know whether Teilhard is right or not, it is an interesting thought experiment to ask what Teilhard's Omega Point would be like from the framework's point of view. If the Omega Point became reality, every single agent (humans and technical devices) would be connected to the noosphere.
The noosphere would be global and it would be free of obvious contradictions. Every agent would live in harmony with every other agent she/he/it is interacting with. Each agent's perception of the world would be perfectly compatible with all knowledge (especially scientific knowledge) about the world. Every single observation and every single interaction of an agent with nature (even with her/his/its own physical body) would immediately contribute to the perception and if necessary to the adaptation of the whole noosphere. The main goal of the noosphere would be to survive the challenges of nature and the universe.