Archive for the 'Computing-as-interaction' Category

Gibson Hall, London, venue for DevCon1, 9-13 November 2015. There was some irony in holding a conference to discuss technology developments in blockchains and distributed ledgers in a grand, neo-classical heritage-listed building erected in 1865. At least it was fitting that a technology currently taking the financial world by storm should be debated in what was designed to be a banking hall (for Westminster Bank). The audience was split fairly evenly between dreadlocked libertarians & cryptocurrency enthusiasts and bankers & lawyers in smart suits: cyberpunk meets Gordon Gekko.

Most people, if they think about the topic at all, probably imagine computer science involves the programming of computers. But what are computers? In most cases, these are just machines of one form or another. And what is programming? Well, it is the issuing of instructions (“commands” in the programming jargon) for the machine to do something or other, or to achieve some state or other. Thus, I view Computer Science as nothing more or less than the science of delegation.

When delegating a task to another person, we are likely to be more effective (as the delegator or commander) the more we know about the skills, capabilities, current commitments and attitudes of that person (the delegatee or commandee). So too with delegating to machines. Accordingly, a large part of theoretical computer science is concerned with exploring the properties of machines, or rather, the deductive properties of mathematical models of machines. Other parts of the discipline concern the properties of languages for commanding machines, including their meaning (their semantics) – this is programming language theory. Because the vast majority of lines of program code nowadays are written by teams of programmers, not individuals, much of computer science – part of the branch known as software engineering – is concerned with how best to organize, manage and evaluate the work of teams of people. Because most machines are controlled by humans and act in concert for or with or to humans, another, related branch of this science of delegation deals with the study of human-machine interactions. In both these branches, computer science reveals itself to have a side which connects directly with the human and social sciences, something not true of the other sciences often grouped with Computer Science: pure mathematics, physics, or chemistry.

And from its modern beginnings 70 years ago, computer science has been concerned with trying to automate whatever can be automated – in other words, with delegating the task of delegating. This is the branch known as Artificial Intelligence. We have intelligent machines which can command other machines, and manage and control them in the same way that humans could. But not all bilateral relationships between machines are those of commander-and-subordinate. More often, in distributed networks machines are peers of one another, intelligent and autonomous (to varying degrees). Thus, commanding is useless – persuasion is what is needed for one intelligent machine to ensure that another machine does what the first desires. And so, as one would expect in a science of delegation, computational argumentation arises as an important area of study.

In July 2005, inspired by a talk on formation flying by unmanned aircraft by Sandor Veres at the Liverpool Agents in Space Symposium, I wrote down some rules of thumb I have been using informally for determining whether an agent-based modeling (ABM) approach is appropriate for a particular application domain. Appropriateness is assessed by answering the following questions:

1. Are there multiple entities in the domain, or can the domain be represented as if there are?
2. Do the entities have access to potentially different information sources or do they have potentially different beliefs? For example, differences may be due to geographic, temporal, legal, resource or conceptual constraints on the information available to the entities.
3. Do the entities have potentially different goals or objectives? This will typically be the case if the entities are owned or instructed by different people or organizations.
4. Do the entities have potentially different preferences (or utilities) over their goals or objectives?
5. Are the relationships between the entities likely to change over time?
6. Does a system representing the domain have multiple threads of control?

If the answers are YES to Question 1 and also YES to any other question, then an agent-based approach is appropriate. If the answer to Question 1 is NO, or if the answers are YES to Question 1 but NO to all other questions, then a traditional object-based approach is more appropriate.
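The rules of thumb above can be sketched as a simple decision procedure. This is purely illustrative – the function name and boolean flags are my own, not part of any ABM toolkit:

```python
def recommend_paradigm(multiple_entities: bool,
                       different_information: bool,
                       different_goals: bool,
                       different_preferences: bool,
                       changing_relationships: bool,
                       multiple_threads: bool) -> str:
    """Return 'agent-based' or 'object-based' per the rules of thumb."""
    if not multiple_entities:
        return "object-based"          # Question 1 answered NO
    # Questions 2-6: any single YES tips the balance towards agents.
    others = [different_information, different_goals,
              different_preferences, changing_relationships,
              multiple_threads]
    return "agent-based" if any(others) else "object-based"

# A market of independently owned traders with private information:
print(recommend_paradigm(True, True, True, False, True, True))
# → agent-based
# A single-owner simulation, one thread of control, shared beliefs:
print(recommend_paradigm(True, False, False, False, False, False))
# → object-based
```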

Traditional object-oriented systems, by contrast, involve static relationships between non-autonomous entities which share the same beliefs, preferences and goals, operating in a system with a single thread of control.

What are models for? Most developers and users of models, in my experience, seem to assume the answer to this question is obvious and thus never raise it. In fact, modeling has many potential purposes, and some of these conflict with one another. Some of the criticisms made of particular models arise from misunderstandings or misperceptions of the purposes of those models, and of the modeling activities which led to them.

Liking cladistics as I do, I thought it useful to list all the potential purposes of models and modeling. The only discussion of this topic that I know of is a brief one by game theorist Ariel Rubinstein, in an appendix to a book on modeling rational behaviour (Rubinstein 1998). Rubinstein considers several alternative purposes for economic modeling, but ignores many others. My list is as follows (to be expanded and annotated in due course):

1. To better understand some real phenomena or existing system. This is perhaps the most commonly perceived purpose of modeling, in the sciences and the social sciences.

2. To predict (some properties of) some real phenomena or existing system. A model aiming to predict some domain may be successful without aiding our understanding of the domain at all. Isaac Newton’s model of the motion of planets, for example, was predictive but not explanatory. I understand that physicist David Deutsch argues that predictive ability is not an end of scientific modeling but a means, since it is how we assess and compare alternative models of the same phenomena. This is wrong on both counts: prediction IS an end of much modeling activity (especially in business strategy and public policy domains), and it is not the only means we use to assess models. Indeed, for many modeling activities, calibration and prediction are problematic, and so predictive capability may not even be possible as a form of model assessment.

3. To manage or control (some properties of) some real phenomena or existing system.

4. To better understand a model of some real phenomena or existing system. Arguably, most of economic theorizing and modeling falls into this category, and Rubinstein’s preferred purpose is this type. Macro-economic models, if they are calibrated at all, are calibrated against artificial, human-defined, variables such as employment, GDP and inflation, variables which may themselves bear a tenuous and dynamic relationship to any underlying economic reality. Micro-economic models, if they are calibrated at all, are often calibrated with stylized facts, abstractions and simplifications of reality which economists have come to regard as representative of the domain in question. In other words, economic models are not usually calibrated against reality directly, but against other models of reality. Similarly, large parts of contemporary mathematical physics (such as string theory and brane theory) have no access to any physical phenomena other than via the mathematical model itself: our only means of apprehension of vibrating strings in inaccessible dimensions beyond the four we live in, for instance, is through the mathematics of string theory. In this light, it seems nonsense to talk about the effectiveness, reasonable or otherwise, of mathematics in modeling reality, since how could we tell?

5. To predict (some properties of) a model of some real phenomena or existing system.

6. To better understand, predict or manage some intended (not-yet-existing) artificial system, so as to guide its design and development. Understanding a system that does not yet exist is qualitatively different to understanding an existing domain or system, because the possibility of calibration is often absent and because the model may act to define the limits and possibilities of subsequent design actions on the artificial system. The use of speech act theory (a model of natural human language) for the design of artificial machine-to-machine languages, or the use of economic game theory (a mathematical model of a stylized conceptual model of particular micro-economic realities) for the design of online auction sites are examples here. The modeling activity can even be performative, helping to create the reality it may purport to describe, as in the case of the Black-Scholes model of options pricing.

7. To provide a locus for discussion between relevant stakeholders in some business or public policy domain. Most large-scale business planning models have this purpose within companies, particularly when multiple partners are involved. Likewise, models of major public policy issues, such as epidemics, have this function. In many complex domains, such as those in public health, models provide a means to tame and domesticate the complexity of the domain. This helps stakeholders to jointly consider concepts, data, dynamics, policy options, and assessment of potential consequences of policy options, all of which may need to be socially constructed.

8. To provide a means for identification, articulation and potentially resolution of trade-offs and their consequences in some business or public policy domain. This is the case, for example, with models of public health risk assessment of chemicals or new products by environmental protection agencies, and models of epidemics deployed by government health authorities.

9. To enable rigorous and justified thinking about the assumptions and their relationships to one another in modeling some domain. Business planning models usually serve this purpose. They may be used to inform actions, both to eliminate or mitigate negative consequences and to enhance positive consequences, as in retroflexive decision making.

10. To enable a means of assessment of managerial competencies of the people undertaking the modeling activity. Investors in start-ups know that the business plans of the company founders are likely to be out of date very quickly. The function of such business plans is not to model reality accurately, but to force rigorous thinking about the domain, and to provide a means by which potential investors can challenge the assumptions and thinking of management as a way of probing the managerial competence of those managers. Business planning can thus be seen to be a form of epideictic argument, where arguments are assessed on their form rather than their content, as I have argued here.

11. As a means of play, to enable the exercise of human intelligence, ingenuity and creativity, in developing and exploring the properties of models themselves. This purpose is true of that human activity known as doing pure mathematics, and perhaps of most of that academic activity known as doing mathematical economics. As I have argued before, mathematical economics is closer to theology than to the modeling undertaken in the natural sciences. I see nothing wrong with this being a purpose of modeling, although it would be nice if academic economists were honest enough to admit that their use of public funds was primarily in pursuit of private pleasures, and any wider social benefits from their modeling activities were incidental.

POSTSCRIPT (Added 2011-06-17): I have just seen Joshua Epstein’s 2008 discussion of the purposes of modeling in science and social science. Epstein lists 17 reasons to build explicit models (in his words, although I have added the label “0” to his first reason):

Charles Leonard Hamblin (1922-1985) was an Australian philosopher and one of Australia’s first computer scientists. His main early contributions to computing, which date from the mid-1950s, were the development and application of Reverse Polish Notation and the zero-address store. He was also the developer of one of the first computer languages, GEORGE. Since his death, his ideas have become influential in the design of computer interaction protocols, and are expected to shape the next generation of e-commerce and machine-communication systems.

In the post below, I mentioned the challenge for knowledge engineers of representing know-how, a task which may require explicit representation of actions, and sometimes also of utterances over actions. The know-how involved in steering a large sailing ship with its diverse crew surely includes the knowledge of who to ask (or to command) to do what, when, and how to respond when these requests (or commands) are ignored, or fail to be executed successfully or timeously.

One might imagine epistemology – the philosophy of knowledge – would be of help here. Philosophers, however, have been seduced since Aristotle by propositions (factual statements about the world having truth values), largely ignoring actions and their representation. Philosophers of language have also mostly focused on speech acts – utterances which act to change the world – rather than on utterances about actions themselves. Even among speech act theorists the obsession with propositions is strong, with attempts to analyze utterances which are demonstrably not propositions (eg, commands) by means of implicit assertive statements – propositions asserting something about the world, where “the world” is extended to include internal mental states and intangible social relations between people – which these utterances allegedly imply. With only a few exceptions (Thomas Reid 1788, Adolf Reinach 1913, Juergen Habermas 1981, Charles Hamblin 1987), philosophers of language have mostly ignored utterances about actions.

Consider the following two statements:

I promise you to wash the car.

I command you to wash the car.

The two statements have almost identical English syntax. Yet their meanings, and the intentions of their speakers, are very distinct. For a start, the action of washing the car would be done by different people – the speaker and the hearer, respectively (assuming for the moment that the command is validly issued, and accepted). Similarly, the power to retract or revoke the action of washing the car rests with different people – with the hearer (as the recipient of the promise) and the speaker (as the commander), respectively.

Linguists generally use “semantics” to refer to the real-world referents of syntactically-correct expressions, while “pragmatics” refers to those aspects of the meaning and use of an expression not captured by its relationship (or lack of one) to things in the world, such as the speaker’s intentions. For neither of these two expressions does it make sense to speak of their truth value: a promise may be questioned as to its sincerity, or its feasibility, or its appropriateness, etc, but not its truth or falsity; likewise, a command may be questioned as to its legal validity, or its feasibility, or its morality, etc, but also not its truth or falsity.

For utterances about actions, such as promises, requests, entreaties and commands, truth-value semantics makes no sense. Instead, we generally need to consider two pragmatic aspects. The first is uptake, the acceptance of the utterance by the hearer (an aspect first identified by Reid and by Reinach), an acceptance which generally creates a social commitment to execute the action described in the utterance by one or other party to the conversation (speaker or hearer). Once uptaken, a second pragmatic aspect comes into play: the power to revoke or retract the social commitment to execute the action. This revocation power does not necessarily lie with the original speaker; only the recipient of a promise may cancel it, for example, and not the original promiser. The revocation power also does not necessarily lie with the uptaker, as commands readily indicate.
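These two pragmatic aspects – uptake and revocation power – can be made concrete in a small data structure. The sketch below is a hypothetical illustration, not an implementation of any standard agent-communication language; the class and field names are invented:

```python
from dataclasses import dataclass

@dataclass
class ActionUtterance:
    """A social commitment created by an utterance about an action.
    Illustrative sketch only; names are not from any real protocol."""
    kind: str          # "promise" or "command"
    speaker: str
    hearer: str
    action: str
    uptaken: bool = False
    revoked: bool = False

    @property
    def performer(self) -> str:
        # A promise commits the speaker to act; a command commits the hearer.
        return self.speaker if self.kind == "promise" else self.hearer

    @property
    def revoker(self) -> str:
        # Only the recipient of a promise may cancel it; only the
        # commander may retract a command.
        return self.hearer if self.kind == "promise" else self.speaker

    def uptake(self) -> None:
        self.uptaken = True   # acceptance creates the commitment

    def revoke(self, by: str) -> None:
        if not self.uptaken:
            raise ValueError("no commitment exists before uptake")
        if by != self.revoker:
            raise PermissionError(f"{by} lacks the power to revoke")
        self.revoked = True

p = ActionUtterance("promise", "Alice", "Bob", "wash the car")
c = ActionUtterance("command", "Alice", "Bob", "wash the car")
print(p.performer, c.performer)   # → Alice Bob
print(p.revoker, c.revoker)       # → Bob Alice
```

Note how two utterances with near-identical syntax (the promise and the command of the example above) assign the performer and revoker roles to opposite parties.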

Why would a computer scientist be interested in such humanistic arcana? The more tasks we delegate to intelligent machines, the more they need to co-ordinate actions with others of like kind. Such co-ordination requires conversations comprising utterances over actions, and, for success, these require agreed syntax, semantics and pragmatics. To give just one example: the use of intelligent devices by soldiers has made the modern battlefield a place of overwhelming information collection, analysis and communication. Much of this communication can be done by intelligent software agents, which is why the US military, inter alia, sponsors research applying the philosophy of language and the philosophy of argumentation to machine communications.

Meanwhile, the philistine British Government intends to cease funding tertiary education in the arts and the humanities. Even utilitarians should object to this.

An orrery is a machine for predicting the movements of heavenly bodies. The oldest known orrery is the Antikythera Mechanism, created in Greece around 2100 years ago and rediscovered in 1901 in a shipwreck near the island of Antikythera (hence its name). The high quality and precision of its components suggest that this device was not unique, since the making of high-quality mechanical components is not trivial and is not usually achieved at the first attempt (something Charles Babbage found, and which delayed his development of computing machinery immensely).

It took until 2006 and the development of x-ray tomography for a plausible theory of the purpose and operations of the Antikythera Mechanism to be proposed (Freeth et al. 2006). The machine was said to be a physical exemplification of late Greek theories of cosmology, in particular the idea that the motion of a heavenly body could be modeled by an epicycle – ie, a body traveling around a circle, which is itself moving around some second circle. This model provided an explanation for the fact that many heavenly bodies appear to move at different speeds at different times of the year, and sometimes even (appear to) move backwards.

There have been two recent developments: One is the re-creation of the machine (or, rather, an interpretation of it) using lego components.

The second has arisen from a more careful examination of the details of the mechanism. According to Marchant (2010), some people now believe that the mechanism exemplifies Babylonian, rather than Greek, cosmology. Babylonian astronomers modeled the movements of heavenly bodies by assuming each body traveled along just one circle, but at two different speeds: movement during one part of the year being faster than during the other.

If this second interpretation of the Antikythera Mechanism is correct, then perhaps it was the mechanism itself (or others like it) which gave late Greek astronomers the idea for an epicycle model. In support of this view is the fact that, apparently, gearing mechanisms and the epicycle model both appeared around the same time, with gears perhaps a little earlier. So late Greek cosmology (and perhaps late geometry) may have arisen in response to, or at least alongside, practical developments and physical models. New ideas in computing typically follow the same trajectory – first they exist in real, human-engineered, systems; then, we develop a formal, mathematical theory of them. Programmable machines, for instance, were invented in the textile industry in the first decade of the 19th century (eg, the Jacquard Loom), but a mathematical theory of programming did not appear until the 1960s. Likewise, we have had a fully-functioning, scalable, global network enabling multiple, asynchronous, parallel, sequential and interleaved interactions since Arpanet four decades ago, but we still lack a thorough mathematical theory of interaction.
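The epicycle model itself can be made concrete with a little code. Here is a hedged sketch with illustrative (not historical) parameter values; apparent retrograde motion shows up as intervals where the body's longitude, as seen from the centre, decreases:

```python
import cmath
import math

def epicycle_position(t, R=1.0, w=1.0, r=0.3, v=8.0):
    """Position (as a complex number) of a body carried on an epicycle:
    a small circle of radius r, turning at angular speed v, whose centre
    rides a deferent circle of radius R at angular speed w.
    Parameter values are illustrative, not historical."""
    return R * cmath.exp(1j * w * t) + r * cmath.exp(1j * v * t)

def wrapped_diff(a, b):
    """Smallest signed angular change from a to b (handles the ±pi wrap)."""
    return (b - a + math.pi) % (2 * math.pi) - math.pi

# Apparent longitude sampled over roughly one deferent revolution;
# retrograde motion appears as intervals of decreasing longitude.
lons = [cmath.phase(epicycle_position(0.01 * k)) for k in range(700)]
retrograde = any(wrapped_diff(a, b) < 0 for a, b in zip(lons, lons[1:]))
print(retrograde)   # → True
```

With these parameters the epicycle's backward swing (tangential speed r*v = 2.4) overwhelms the deferent's forward motion (R*w = 1.0) whenever the body is on the inner side of its small circle, which is exactly the epicycle model's explanation of apparent backward motion.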

And what have the Babylonians ever done for us? Apart from giving us our units for measuring of time (divided into 60) and of angles (into 360 degrees)?

Thanks to the ever-watchful Normblog, I encounter an article by Colin Tatz inveighing against talk about sport. Norm is right to call Tatz to account for writing nonsense – talk about sport is just as meaningful as talk about politics, history, religion, nuclear deterrence, genocide, or any other real-world human activity. Tatz says:

Sport is international phatic but also a crucial Australian (male) vehicle. It enables not just short, passing greetings but allows for what may seem like deep, passionate and meaningful conversations but which in the end are unmemorable, empty, producing nothing and enhancing no one.

Unmemorable?! Really? What Australian could forget Norman May’s shouted “Gold! Gold for Australia! Gold!” commentary at the end of the men’s 400-metre swimming medley at the 1980 Olympics in Moscow. Only a churlish gradgrind could fail to be enhanced by hearing this. And what Australian of a certain age could forget the inimitable footie commentary of Rex Mossop, including, for example, such statements as, “That’s the second consecutive time he’s done that in a row one straight after the other.” Mossop’s heat-of-the-moment sporting talk was commemorated with his many winning places in playwright Alex Buzo’s Australian Indoor Tautology Pennant, an annual competition held, as I recall, in Wagga Wagga, Gin Gin and Woy Woy (although not in Woop Woop or in The Never Never), before moving internationally to exotic locations such as Pago Pago, Xai Xai and Baden Baden. Unmemorable, Mr Tatz? Enhancing no one? Really? To be clear, these are not memorable sporting events, but memorable sporting commentary. And all I’ve mentioned so far is sporting talk, not the great writers on baseball, on golf, on cricket, on swimming, . . .

But as well as misunderstanding what talk about sport is about and why it is meaningful, Tatz is wrong on another score. He says:

But why so much natter and clatter about sport? Eco’s answer is that sport “is the maximum aberration of ‘phatic’ speech”, which is really a negation of speech.

Phatic speech is meaningless speech, as in “G’day, how’s it going?” or “have a nice day” or “catch you later” — small talk phrases intended to produce a sense of sociability, sometimes uttered in the hope that it will lead to further and more real intercourse, but human enough even if the converse goes no further.

Phatic communications are about establishing and maintaining relationships between people. Such a purpose is the very essence of speech communication, not its negation. Tatz, I fear, has fallen into the trap of so many computer scientists – to focus on the syntax of messages, and completely ignore their semantics and pragmatics. The syntax of messages concerns their surface form, their logical structure, their obedience (or not) to rules which determine whether they are legal and well-formed statements (or not) in the language they purport to arise from. The semantics of utterances concerns their truth or falsity, in so far as they describe real objects in some world (perhaps the one we all live in, or some past, future or imagined world), while their pragmatics concerns those aspects of their meaning unrelated to their truth status (for example, who has the power to revoke or retract them).

I have discussed this syntax-is-all-there-is mistake before. I believe the root causes of this mistaken view are two-fold: the misguided focus of philosophers these last two centuries on propositions to the exclusion of other types of utterances and statements (of which profound error Terry Eagleton has shown himself guilty), and the misguided view that we now live in some form of Information Society, a view which wrongly focuses attention on the information transferred by utterances to the exclusion of any other functions that utterances may serve or any other things we agents (people and machines) may be doing and aiming to do when we talk. If you don’t believe me about the potentially complex functionality of utterances, even when viewed as nothing more than the communication of factual propositions, then read this simple example.

If communications were only about the transfer of explicit information, then life would be immensely less interesting. It would also not be human life, for we would be no more intelligent than desktop computers passing HTTP requests and responses to one another.

The Internet, the World-Wide-Web and hypertext were all forecast by Vannevar Bush, in a July 1945 article for The Atlantic, entitled As We May Think. Perhaps this is not completely surprising since Bush had a strong influence on WW II and post-war military-industrial technology policy, as Director of the US Government Office of Scientific Research and Development. Because of his influence, his forecasts may to some extent have been self-fulfilling.

However, his article also predicted automated machine reasoning using both logic programming (the computational use of formal logic) and computational argumentation (the formal representation and manipulation of arguments). These are both now important domains of AI and computer science, which developed first in Europe and which are still much stronger there than in the USA. An excerpt:

The scientist, however, is not the only person who manipulates data and examines the world about him by the use of logical processes, although he sometimes preserves this appearance by adopting into the fold anyone who becomes logical, much in the manner in which a British labor leader is elevated to knighthood. Whenever logical processes of thought are employed—that is, whenever thought for a time runs along an accepted groove—there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine.

Logic can become enormously difficult, and it would undoubtedly be well to produce more assurance in its use. The machines for higher analysis have usually been equation solvers. Ideas are beginning to appear for equation transformers, which will rearrange the relationship expressed by an equation in accordance with strict and rather advanced logic. Progress is inhibited by the exceedingly crude way in which mathematicians express their relationships. They employ a symbolism which grew like Topsy and has little consistency; a strange fact in that most logical field.

A new symbolism, probably positional, must apparently precede the reduction of mathematical transformations to machine processes. Then, on beyond the strict logic of the mathematician, lies the application of logic in everyday affairs. We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register. But the machine of logic will not look like a cash register, even of the streamlined model.

Edinburgh sociologist Donald MacKenzie wrote a nice history and sociology of logic programming and the use of logic in computer science, Mechanizing Proof: Computing, Risk, and Trust. The only flaw of this fascinating book is an apparent assumption throughout that theorem-proving by machines refers only to the proving (or not) of theorems in mathematics. Rather, theorem-proving in AI refers to proving claims in any domain of knowledge represented in a formal, logical language. Medical expert systems, for example, may use theorem-proving techniques to infer the presence of a particular disease in a patient; the claims being proved (or not) are theorems of the formal language representing the domain, not necessarily mathematical theorems.
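To see what such non-mathematical theorem-proving looks like, here is a minimal forward-chaining inference sketch over Horn-clause rules. The medical rules and symptoms are invented for illustration, and the code is not drawn from any real expert system:

```python
def forward_chain(facts, rules):
    """facts: a set of atoms; rules: a list of (premises, conclusion) pairs.
    Repeatedly apply rules until no new atom can be derived; everything
    derived is a 'theorem' of this small knowledge base."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and set(premises) <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Invented diagnostic rules, purely for illustration.
rules = [
    (["fever", "cough"], "respiratory_infection"),
    (["respiratory_infection", "chest_pain"], "suspected_pneumonia"),
]
facts = {"fever", "cough", "chest_pain"}
print("suspected_pneumonia" in forward_chain(facts, rules))   # → True
```

The claim "suspected_pneumonia" proved here is a theorem of the formal language describing the patient, in exactly the sense MacKenzie's book overlooks: no mathematics is involved.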