Wikipedia gains its power by hosting a large amount of useful information (prominently ranked by search engines) that is tightly controlled by censorship.

However, according to Paulo Correa, Alexandra Correa and Malgosia Askanas [2005]:

If the “ranking” users - those that are more equal than the others - do not attain this position based on their expertise, what, then, is their “rank” based on? It is based on their devotion to Wikipedia-itself-as-social-dogma, on the amount of time they spend dutifully performing tedious maintenance chores, on their bureaucratic zealotry and their policial aspirations. In other words, in Wikipedia, ultimate decisions about what constitutes “encyclopedic fact” and what constitutes “vandalism” devolve to a cadre of Internet bureaucrats with no other qualifications than their devotion to Wikipedianism.…One of the main problems stems precisely from the fact that Wikipedia's de-facto arbiters of what constitutes “science”, “information”, “fact”, “knowledge” - those who make it into the ranks of Wikipedia administrators, and who have the time and persistence to win any “edit war” - are Internet technobureaucrats without any actual love of knowledge or any respect for those who spend their life fighting for it. What these people mean by “knowledge” is a certain type of mainstream opinion, shaped by the latest trends in Google, Nature, Wired, NASA, the Sierra Club, etc. Wikipedia, in spite of its much-waved banner of “Neutral Point of View”, is permeated by a systemic bias. “Neutral point of view”, in Wikipedia, denotes a point of view that represents the 70th-percentile “consensus” of Web 2.0 technobureaucratic opinion. (emphases added)

Saturday, August 23, 2008

Logic Programming can be broadly defined as “using logic to deduce computational steps from existing propositions” (although this definition is somewhat controversial). The focus of this paper is on the development of this idea. Consequently, it does not treat associated topics such as constraints, abduction, etc.
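The definition above can be illustrated with a minimal sketch of forward chaining over Horn-clause-style rules, in which each derived fact is a computational step deduced by logic. The rule encoding and names here are mine, for illustration only, and are not taken from the paper.

```python
# A toy illustration of "using logic to deduce computational steps
# from existing propositions": forward chaining over Horn-style rules.

def forward_chain(facts, rules):
    """Derive the closure of `facts` under `rules`.

    facts: a set of atoms; rules: a list of (premises, conclusion) pairs.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # a computational step deduced by logic
                changed = True
    return facts

rules = [(["rain"], "wet_ground"), (["wet_ground"], "slippery")]
print(forward_chain({"rain"}, rules))  # the deductive closure (set order may vary)
```

The loop simply fires every rule whose premises are already established, until a fixed point is reached; real logic programming systems refine this with goal-directed (backward) search and unification.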

The idea has had a long development with many twists, in which important questions turned out to have surprising answers, including the following:

Is computation reducible to logic?

Are the laws of thought consistent?

This paper describes what went wrong at various points, what was done about it, and what it might mean for the future of Logic Programming.

This paper develops a very powerful formalism (called Direct Logic™) that incorporates the mathematics of Computer Science and allows unstratified inference and reflection, using mathematical induction, for almost all of classical logic. Direct Logic allows mutual reflection among the code, documentation, and use cases of large software systems, thereby overcoming the limitations of the traditional Tarskian framework of stratified metatheories.

In his first incompleteness theorem, Gödel formalized and proved that not all mathematical questions can be decided by inference. However, the incompleteness theorem (as generalized by Rosser) relies on the assumption of consistency! This paper proves a generalization of the Gödel/Rosser incompleteness theorem: a strongly paraconsistent theory is incomplete. There is a further consequence: although the semi-classical mathematical fragment of Direct Logic is evidently consistent, since the Gödelian paradoxical proposition is self-provable, every reflective strongly paraconsistent theory in Direct Logic is inconsistent!

This paper also proves that Logic Programming is not computationally universal in that there are concurrent programs for which there is no equivalent in Direct Logic. Thus the Logic Programming paradigm is strictly less general than the Procedural Embedding of Knowledge paradigm.

For example, extension and revision are required of the fundamental assumption of the Event Calculus: time-varying properties hold at particular time-points if they have been initiated by an action at some earlier time-point and not terminated by another action in the meantime. This assumption is overly simplistic for organizations, in which time-varying properties have to be actively maintained and managed in order to continue to hold, and in which termination by another action is not required for a property to cease holding. I.e., if active measures are not taken, things will go haywire by default. Consequently the Event Calculus approach must evolve into a strongly paraconsistent system structured around participations in space-time.
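The fundamental assumption stated above can be rendered as a toy sketch (the encoding and names are mine, not the Event Calculus literature's): a fluent holds at time t iff some event initiated it strictly earlier and no event terminated it in between.

```python
# A toy rendering of the Event Calculus assumption: a time-varying
# property (fluent) holds at t iff initiated earlier and not since terminated.

def holds_at(fluent, t, events):
    """events: list of (time, kind, fluent), kind in {"initiates", "terminates"}."""
    holds = False
    for time, kind, f in sorted(events):
        if time >= t or f != fluent:
            continue  # only strictly earlier events about this fluent matter
        if kind == "initiates":
            holds = True
        elif kind == "terminates":
            holds = False
    return holds

events = [(1, "initiates", "alarm_on"), (5, "terminates", "alarm_on")]
print(holds_at("alarm_on", 3, events))  # True: initiated at 1, not yet terminated
print(holds_at("alarm_on", 7, events))  # False: terminated at 5
```

Note how the sketch makes the paragraph's criticism concrete: once initiated, a fluent persists by default until an explicit terminating event, with no provision for properties that lapse unless actively maintained.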

Similarly, extension and revision are required for Model Checking properties of systems. Previously, Model Checking has been performed using the model of nondeterministic automata based on states determined by time-points. These nondeterministic automata are not suitable for organizations, which are highly structured and operate asynchronously with only loosely bounded nondeterminism. Consequently, Model Checking needs to evolve in the direction of verifying participatory behavior in Organizations.
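The conventional time-point/state approach criticized above can be sketched as explicit-state reachability checking of a safety property over a nondeterministic transition relation (a simplified illustration; the names and the toy system are mine):

```python
# Conventional Model Checking sketch: explore every state reachable from
# the initial state via nondeterministic transitions, and verify that no
# reachable state violates the safety property.

def check_safety(initial, successors, is_bad):
    """successors(state) -> iterable of next states (the nondeterministic choices)."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if is_bad(state):
            return False  # a reachable state violates the property
        frontier.extend(successors(state))
    return True

# Toy system: a counter that may reset or increment; property: it never exceeds 3.
succ = lambda n: [0, n + 1] if n < 3 else [0]
print(check_safety(0, succ, lambda n: n > 3))  # True
```

The sketch presumes a bounded, enumerable state space indexed by time-points; the paragraph's point is that asynchronous organizations with only loosely bounded nondeterminism do not fit this mold.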

Organizational Computing is a computational model for using the principles, practices, and methods of human organizations. Organizations of Restricted Generality (ORGs) have been proposed as a foundation for Organizational Computing. ORGs are the natural extension of Web Services, which are rapidly becoming the overwhelming standard for distributed computing and application interoperability in Organizational Computing. The thesis of this paper is that large-scale Organizational Computing requires reflection and strong paraconsistency for organizational practices, policies, and norms. Strong paraconsistency is required because the practices, policies, and norms of large-scale Organizational Computing are pervasively inconsistent. By the standard rules of logic, anything and everything can be inferred from an inconsistency, e.g., “The moon is made of green cheese.” The purpose of strongly paraconsistent logic is to develop principles of reasoning so that irrelevances cannot be inferred from the fact of inconsistency, while preserving all natural inferences that do not explode in the face of inconsistency.
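The classical “explosion” referred to above takes two steps: disjunction introduction (from P infer P ∨ Q) and disjunctive syllogism (from P ∨ Q and ¬P infer Q). A toy trace of the derivation (my own sketch, not Direct Logic):

```python
# Classical explosion: from the contradictory pair {p, not-p}, an arbitrary
# q follows. Each proof step is recorded as a (formula, justification) pair.

def explosion_proof(p, q):
    return [
        (p, "premise"),
        (("not", p), "premise"),
        (("or", p, q), "disjunction introduction from p"),
        (q, "disjunctive syllogism from (p or q) and (not p)"),
    ]

proof = explosion_proof("the moon is made of rock", "the moon is made of green cheese")
print(proof[-1])  # the arbitrary conclusion q, with its justification
```

A strongly paraconsistent logic blocks this derivation, typically by restricting disjunctive syllogism (or disjunction introduction), so that an inconsistency does not license irrelevant conclusions.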

Reflection is required so that the practices, policies, and norms can mutually refer to each other and make inferences. Reflection and strong paraconsistency are important properties of Direct Logic [Hewitt 2007] for large software systems. In his first incompleteness theorem, Gödel formalized and proved that not all mathematical questions can be decided by inference. But the incompleteness theorem (as generalized by Rosser) relies on the assumption of consistency! This paper proves a generalization of the Gödel/Rosser incompleteness theorem: theories of Direct Logic are incomplete. There is a further consequence: although the semi-classical mathematical fragment of Direct Logic is evidently consistent, since the Gödelian paradoxical proposition is self-provable, every theory in Direct Logic has an inconsistency!
